FreeMarker Debugger framework usage example - debugging

I have started working on a FreeMarker debugger using breakpoints etc. The supplied framework is based on Java RMI. So far I get it to suspend at one breakpoint, but then ... nothing.
Is there a very basic example setup for the server part and the client part, other than the debug/impl classes supplied with the sources? That would be of great help.
This is my server class:
class DebuggerServer {
private final int port;
private final String templateName1;
private final Environment templateEnv;
private boolean stop = false;
public DebuggerServer(String templateName) throws IOException {
System.setProperty("freemarker.debug.password", "hello");
port = SecurityUtilities.getSystemProperty("freemarker.debug.port", Debugger.DEFAULT_PORT).intValue();
System.setProperty("freemarker.debug.password", "hello");
Configuration cfg = new Configuration();
// Some other recommended settings:
cfg.setIncompatibleImprovements(new Version(2, 3, 20));
cfg.setDefaultEncoding("UTF-8");
cfg.setLocale(Locale.US);
cfg.setTemplateExceptionHandler(TemplateExceptionHandler.RETHROW_HANDLER);
Template template = cfg.getTemplate(templateName);
templateName1 = template.getName();
System.out.println("Debugging " + templateName1);
Map<String, Object> root = new HashMap<>();
Writer consoleWriter = new OutputStreamWriter(System.out);
templateEnv = new Environment(template, null, consoleWriter);
DebuggerService.registerTemplate(template);
}
public void start() {
new Thread(new Runnable() {
@Override
public void run() {
startInternal();
}
}, "FreeMarker Debugger Server Acceptor").start();
}
private void startInternal() {
boolean handled = false;
while (!stop) {
List breakPoints = DebuggerService.getBreakpoints(templateName1);
for (int i = 0; i < breakPoints.size(); i++) {
try {
Breakpoint bp = (Breakpoint) breakPoints.get(i);
handled = DebuggerService.suspendEnvironment(templateEnv, templateName1, bp.getLine());
} catch (RemoteException e) {
System.err.println(e.getMessage());
}
}
}
}
public void stop() {
this.stop = true;
}
}
This is the client class:
class DebuggerClientHandler {
private final Debugger client;
private boolean stop = false;
public DebuggerClientHandler(String templateName) throws IOException {
// System.setProperty("freemarker.debug.password", "hello");
// System.setProperty("java.rmi.server.hostname", "192.168.2.160");
client = DebuggerClient.getDebugger(InetAddress.getByName("localhost"), Debugger.DEFAULT_PORT, "hello");
client.addDebuggerListener(environmentSuspendedEvent -> {
System.out.println("Break " + environmentSuspendedEvent.getName() + " at line " + environmentSuspendedEvent.getLine());
// environmentSuspendedEvent.getEnvironment().resume();
});
}
public void start() {
new Thread(new Runnable() {
@Override
public void run() {
startInternal();
}
}, "FreeMarker Debugger Server").start();
}
private void startInternal() {
while (!stop) {
}
}
public void stop() {
this.stop = true;
}
public void addBreakPoint(String s, int i) throws RemoteException {
Breakpoint bp = new Breakpoint(s, i);
List breakpoints = client.getBreakpoints();
client.addBreakpoint(bp);
}
}

Liferay IDE (https://github.com/liferay/liferay-ide) has FreeMarker template debug support (https://issues.liferay.com/browse/IDE-976), so somehow they managed to use it. I have never seen it in action though. Other than that, I'm not aware of anything that uses the debug API.
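That said, looking at your client code, one thing that stands out is that the listener never resumes the suspended environment (the resume() call is commented out), which would explain hitting the first breakpoint and then nothing. A minimal client-side sketch, with a hypothetical template name and line, and the host/port/password taken from your code:
import java.net.InetAddress;

import freemarker.debug.Breakpoint;
import freemarker.debug.Debugger;
import freemarker.debug.DebuggerClient;

public class ResumingDebuggerClient {
    public static void main(String[] args) throws Exception {
        Debugger debugger = DebuggerClient.getDebugger(
                InetAddress.getByName("localhost"), Debugger.DEFAULT_PORT, "hello");

        debugger.addDebuggerListener(event -> {
            System.out.println("Suspended " + event.getName() + " at line " + event.getLine());
            // Inspect the suspended environment here if needed, then let it continue;
            // without this call the template stays suspended after the first hit.
            event.getEnvironment().resume();
        });

        debugger.addBreakpoint(new Breakpoint("test.ftl", 5)); // hypothetical template name and line
    }
}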

How to start multiple boot apps for end-to-end tests?

I'd like to write end-to-end tests to validate two boot apps work well together with various profiles.
What already works:
create a third maven module (e2e) for end-to-end tests, in addition to the two tested apps (authorization-server and resource-server)
write tests using TestRestTemplate
Tests work fine if I start authorization-server and resource-server manually.
What I now want to do is automate the tested boot apps startup and shutdown with the right profiles for each test.
I tried:
adding maven dependencies to tested apps in e2e module
using SpringApplication in new threads for each app to start
But I face misconfiguration issues, as all resources and dependencies end up in the same shared classpath...
Is there a way to sort this out?
I'm also considering starting two separate java -jar ... processes, but then, how do I ensure the tested apps' fat jars are built before the e2e tests run?
Current app start/shutdown code sample, which fails as soon as I add a Maven dependency on the second app to start:
private Service startAuthorizationServer(boolean isJwtActive) throws InterruptedException {
return new Service(
AuthorizationServer.class,
isJwtActive ? new String[]{ "jwt" } : new String[]{} );
}
private static final class Service {
private ConfigurableApplicationContext context;
private final Thread thread;
public Service(Class<?> appClass, String... profiles) throws InterruptedException {
thread = new Thread(() -> {
SpringApplication app = new SpringApplicationBuilder(appClass).profiles(profiles).build();
context = app.run();
});
thread.setDaemon(false);
thread.start();
while (context == null || !context.isRunning()) {
Thread.sleep(1000);
};
}
@PreDestroy
public void stop() {
if (context != null) {
SpringApplication.exit(context);
}
if (thread != null) {
thread.interrupt();
}
}
}
I think that in your case, running the two applications via Docker Compose can be a good idea.
This article shows how you can set up integration tests using Docker Compose and JUnit: https://blog.codecentric.de/en/2017/03/writing-integration-tests-docker-compose-junit/
Also, take a look at this post from Martin Fowler: https://martinfowler.com/articles/microservice-testing/
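If you go that route, one way to drive the compose file from JUnit is the Testcontainers library (my assumption here, not necessarily what the article uses); the compose file path, service names and ports below are placeholders:
import java.io.File;

import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.DockerComposeContainer;

public class EndToEndIT {

    // Starts/stops the apps defined in the (hypothetical) compose file around the test class
    @ClassRule
    public static DockerComposeContainer environment =
            new DockerComposeContainer(new File("src/test/resources/docker-compose.yml"))
                    .withExposedService("authorization-server", 8080)
                    .withExposedService("resource-server", 8081);

    @Test
    public void appsAreReachable() {
        String resourceServerUrl = "http://"
                + environment.getServiceHost("resource-server", 8081) + ":"
                + environment.getServicePort("resource-server", 8081);
        // call the apps with TestRestTemplate / RestTemplate from here
        System.out.println("resource-server at " + resourceServerUrl);
    }
}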
I got things working with the second solution:
the end-to-end tests project has no Maven dependency other than what is required to run Spring tests with TestRestTemplate
the test config initialises the environment, running mvn package on the required modules in separate processes
test cases (re)start the apps with chosen profiles in separate java -jar ... processes
Here is the helper class I wrote for this (taken from there):
class ActuatorApp {
private final int port;
private final String actuatorEndpoint;
private final File jarFile;
private final TestRestTemplate actuatorClient;
private Process process;
private ActuatorApp(File jarFile, int port, TestRestTemplate actuatorClient) {
this.port = port;
this.actuatorEndpoint = getBaseUri() + "actuator/";
this.actuatorClient = actuatorClient;
this.jarFile = jarFile;
Assert.isTrue(jarFile.exists(), jarFile.getAbsolutePath() + " does not exist");
}
public void start(List<String> profiles, List<String> additionalArgs) throws InterruptedException, IOException {
if (isUp()) {
stop();
}
this.process = Runtime.getRuntime().exec(appStartCmd(jarFile, profiles, additionalArgs));
Executors.newSingleThreadExecutor().submit(new ProcessStdOutPrinter(process));
for (int i = 0; i < 10 && !isUp(); ++i) {
Thread.sleep(5000);
}
}
public void start(String... profiles) throws InterruptedException, IOException {
this.start(Arrays.asList(profiles), List.of());
}
public void stop() throws InterruptedException {
if (isUp()) {
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON_UTF8);
headers.setAccept(List.of(MediaType.APPLICATION_JSON_UTF8));
actuatorClient.postForEntity(actuatorEndpoint + "shutdown", new HttpEntity<>(headers), Object.class);
Thread.sleep(5000);
}
if (process != null) {
process.destroy();
}
}
private String[] appStartCmd(File jarFile, List<String> profiles, List<String> additionalArgs) {
final List<String> cmd = new ArrayList<>(
List.of(
"java",
"-jar",
jarFile.getAbsolutePath(),
"--server.port=" + port,
"--management.endpoint.heath.enabled=true",
"--management.endpoint.shutdown.enabled=true",
"--management.endpoints.web.exposure.include=*",
"--management.endpoints.web.base-path=/actuator"));
if (profiles.size() > 0) {
cmd.add("--spring.profiles.active=" + profiles.stream().collect(Collectors.joining(",")));
}
if (additionalArgs != null) {
cmd.addAll(additionalArgs);
}
return cmd.toArray(new String[0]);
}
private boolean isUp() {
try {
final ResponseEntity<HealthResponse> response =
actuatorClient.getForEntity(actuatorEndpoint + "health", HealthResponse.class);
return response.getStatusCode().is2xxSuccessful() && response.getBody().getStatus().equals("UP");
} catch (ResourceAccessException e) {
return false;
}
}
public static Builder builder(String moduleName, String moduleVersion) {
return new Builder(moduleName, moduleVersion);
}
/**
* Configure and build a spring-boot app
*
* @author Ch4mp
*
*/
public static class Builder {
private String moduleParentDirectory = "..";
private final String moduleName;
private final String moduleVersion;
private int port = SocketUtils.findAvailableTcpPort(8080);
private String actuatorClientId = "actuator";
private String actuatorClientSecret = "secret";
public Builder(String moduleName, String moduleVersion) {
this.moduleName = moduleName;
this.moduleVersion = moduleVersion;
}
public Builder moduleParentDirectory(String moduleParentDirectory) {
this.moduleParentDirectory = moduleParentDirectory;
return this;
}
public Builder port(int port) {
this.port = port;
return this;
}
public Builder actuatorClientId(String actuatorClientId) {
this.actuatorClientId = actuatorClientId;
return this;
}
public Builder actuatorClientSecret(String actuatorClientSecret) {
this.actuatorClientSecret = actuatorClientSecret;
return this;
}
/**
* Ensures the app module is found and packaged
* @return app ready to be started
* @throws IOException if module packaging throws one
* @throws InterruptedException if module packaging throws one
*/
public ActuatorApp build() throws IOException, InterruptedException {
final File moduleDir = new File(moduleParentDirectory, moduleName);
packageModule(moduleDir);
final File jarFile = new File(new File(moduleDir, "target"), moduleName + "-" + moduleVersion + ".jar");
return new ActuatorApp(jarFile, port, new TestRestTemplate(actuatorClientId, actuatorClientSecret));
}
private void packageModule(File moduleDir) throws IOException, InterruptedException {
Assert.isTrue(moduleDir.exists(), "could not find module. " + moduleDir + " does not exist.");
String[] cmd = new File(moduleDir, "pom.xml").exists() ?
new String[] { "mvn", "-DskipTests=true", "package" } :
new String[] { "./gradlew", "bootJar" };
Process mvnProcess = new ProcessBuilder().directory(moduleDir).command(cmd).start();
Executors.newSingleThreadExecutor().submit(new ProcessStdOutPrinter(mvnProcess));
Assert.isTrue(mvnProcess.waitFor() == 0, "module packaging exited with error status.");
}
}
private static class ProcessStdOutPrinter implements Runnable {
private InputStream inputStream;
public ProcessStdOutPrinter(Process process) {
this.inputStream = process.getInputStream();
}
@Override
public void run() {
new BufferedReader(new InputStreamReader(inputStream)).lines().forEach(System.out::println);
}
}
public String getBaseUri() {
return "https://localhost:" + port;
}
}
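For completeness, a test uses it roughly like this (module names, version, profile and endpoint below are placeholders):
import org.junit.Test;

public class AuthorizationAndResourceServerIT {

    @Test
    public void appsWorkTogether() throws Exception {
        ActuatorApp authorizationServer =
                ActuatorApp.builder("authorization-server", "0.0.1-SNAPSHOT").build();
        ActuatorApp resourceServer =
                ActuatorApp.builder("resource-server", "0.0.1-SNAPSHOT").build();
        authorizationServer.start("jwt");
        resourceServer.start("jwt");
        try {
            // exercise the two apps together with TestRestTemplate here, e.g.
            // new TestRestTemplate().getForEntity(resourceServer.getBaseUri() + "greet", String.class);
        } finally {
            resourceServer.stop();
            authorizationServer.stop();
        }
    }
}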

Building a Future API on top of Netty

I want to build an API based on Futures (from java.util.concurrent) that is powered by a custom protocol on top of Netty (version 4). The basic idea is to write a simple library that would abstract the underlying Netty implementation and make it easier to make requests.
Using this library, one should be able to write something like this:
Request req = new Request(...);
Future<Response> responseFuture = new ServerIFace(host, port).call(req);
// For example, let's block until this future is resolved
Response res = responseFuture.get().getResult();
Underneath this code, a Netty client is connected:
public class ServerIFace {
private Bootstrap bootstrap;
private EventLoopGroup workerGroup;
private String host;
private int port;
public ServerIFace(String host, int port) {
this.host = host;
this.port = port;
this.workerGroup = new NioEventLoopGroup();
bootstrap();
}
private void bootstrap() {
bootstrap = new Bootstrap();
bootstrap.group(workerGroup);
bootstrap.channel(NioSocketChannel.class);
bootstrap.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new ObjectEncoder());
ch.pipeline().addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(Response.class.getClassLoader())));
ch.pipeline().addLast("response", new ResponseReceiverChannelHandler());
}
});
}
public Future<Response> call(final Request request) throws InterruptedException {
CompletableFuture<Response> responseFuture = new CompletableFuture<>();
Channel ch = bootstrap.connect(host, port).sync().channel();
ch.writeAndFlush(request).addListener((f) -> {
if (f.isSuccess()) {
System.out.println("Wrote successfully");
} else {
f.cause().printStackTrace();
}
});
ChannelFuture closeFuture = ch.closeFuture();
// Have to 'convert' ChannelFuture to java.util.concurrent.Future
closeFuture.addListener((f) -> {
if (f.isSuccess()) {
// How to get this response?
Response response = ((ResponseReceiverChannelHandler) ch.pipeline().get("response")).getResponse();
responseFuture.complete(response);
} else {
f.cause().printStackTrace();
responseFuture.cancel(true);
}
ch.close();
}).sync();
return responseFuture;
}
}
Now, as you can see, in order to abstract Netty's inner ChannelFuture, I have to 'convert' it to Java's Future (I'm aware that ChannelFuture is derived from Future, but that information doesn't seem useful at this point).
Right now, I'm capturing this Response object in the last handler of my inbound part of the client pipeline, the ResponseReceiverChannelHandler.
public class ResponseReceiverChannelHandler extends ChannelInboundHandlerAdapter {
private Response response = null;
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
this.response = (Response)msg;
ctx.close();
}
public Response getResponse() {
return response;
}
}
Since I'm new to Netty and these things in general, I'm looking for a cleaner, thread-safe way of delivering this object to the API user.
Correct me if I'm wrong, but none of the Netty examples show how to achieve this, and most of the Client examples just print out whatever they get from Server.
Please note that my main goal here is to learn more about Netty, and that this code has no production purposes.
For the reference (although I don't think it's that relevant) here's the Server code.
public class Server {
public static class RequestProcessorHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ChannelFuture future;
if (msg instanceof Request) {
Request req = (Request)msg;
Response res = some function of req
future = ctx.writeAndFlush(res);
} else {
future = ctx.writeAndFlush("Error, not a request!");
}
future.addListener((f) -> {
if (f.isSuccess()) {
System.out.println("Response sent!");
} else {
System.out.println("Response not sent!");
f.cause().printStackTrace();
}
});
}
}
public int port;
public Server(int port) {
this.port = port;
}
public void run() throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(Request.class.getClassLoader())));
ch.pipeline().addLast(new ObjectEncoder());
// Not really shutting down this threadpool but it's ok for now
ch.pipeline().addLast(new DefaultEventExecutorGroup(2), new RequestProcessorHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
ChannelFuture f = b.bind(port).sync();
f.channel().closeFuture().sync();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
}
public static void main(String[] args) throws Exception {
int port;
if (args.length > 0) {
port = Integer.parseInt(args[0]);
} else {
port = 8080;
}
new Server(port).run();
}
}
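One way to make the hand-off explicit, sketched under the assumption that each call() opens its own connection as above: give the connection a handler that owns the CompletableFuture and completes it on read, instead of pulling the response out of a shared handler field afterwards.
import java.util.concurrent.CompletableFuture;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Sketch only: one handler instance per connection, completing a per-request future.
public class ResponseFutureHandler extends ChannelInboundHandlerAdapter {
    private final CompletableFuture<Response> responseFuture;

    public ResponseFutureHandler(CompletableFuture<Response> responseFuture) {
        this.responseFuture = responseFuture;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // CompletableFuture.complete is thread-safe, so the API user can block on get()
        // or chain callbacks without touching the pipeline afterwards.
        responseFuture.complete((Response) msg);
        ctx.close();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        responseFuture.completeExceptionally(cause);
        ctx.close();
    }
}
call() would then add this handler after connecting (ch.pipeline().addLast(new ResponseFutureHandler(responseFuture))) and simply return the future, without the closeFuture listener.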

Spark-Streaming CustomReceiver Unknown Host Exception

I am new to Spark Streaming. I want to stream a URL in order to retrieve info from it, so I used the JavaCustomReceiver to stream the URL.
This is the code I'm using (source):
public class JavaCustomReceiver extends Receiver<String> {
private static final Pattern SPACE = Pattern.compile(" ");
public static void main(String[] args) throws Exception {
SparkConf sparkConf = new SparkConf().setAppName("JavaCustomReceiver");
JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, new Duration(1000));
JavaReceiverInputDStream<String> lines = ssc.receiverStream(
new JavaCustomReceiver("http://stream.meetup.com/2/rsvps", 80));
JavaDStream<String> words = lines.flatMap(new
FlatMapFunction<String, String>() {
@Override
public Iterator<String> call(String x) {
return Arrays.asList(SPACE.split(x)).iterator();
}
});
JavaPairDStream<String, Integer> wordCounts = words.mapToPair(
new PairFunction<String, String, Integer>() {
@Override
public Tuple2<String, Integer> call(String s) {
return new Tuple2<>(s, 1);
}
}).reduceByKey(new Function2<Integer, Integer, Integer>() {
@Override
public Integer call(Integer i1, Integer i2) {
return i1 + i2;
}
});
wordCounts.print();
ssc.start();
ssc.awaitTermination();
}
String host = null;
int port = -1;
public JavaCustomReceiver(String host_, int port_) {
super(StorageLevel.MEMORY_AND_DISK_2());
host = host_;
port = port_;
}
public void onStart() {
new Thread() {
@Override
public void run() {
receive();
}
}.start();
}
public void onStop() {
}
private void receive() {
try {
Socket socket = null;
BufferedReader reader = null;
String userInput = null;
try {
// connect to the server
socket = new Socket(host, port);
reader = new BufferedReader(
new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
// Until stopped or connection broken continue reading
while (!isStopped() && (userInput = reader.readLine()) != null) {
System.out.println("Received data '" + userInput + "'");
store(userInput);
}
} finally {
Closeables.close(reader, /* swallowIOException = */ true);
Closeables.close(socket, /* swallowIOException = */ true);
}
restart("Trying to connect again");
} catch (ConnectException ce) {
// restart if could not connect to server
restart("Could not connect", ce);
} catch (Throwable t) {
restart("Error receiving data", t);
}
}
}
However, I keep getting a java.net.UnknownHostException
How can I fix this? What is wrong with the code that I'm using?
After reading the code of the custom receiver referenced, it is clear that it is a TCP receiver that connects to a host:port, and not an HTTP receiver that could take a URL. You'll have to change the code to read from an HTTP endpoint.
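For example, the receive() method could read from the URL directly, along these lines (untested sketch, hard-coding the meetup URL from the question):
// Additional imports needed: java.net.URL, java.net.HttpURLConnection, java.io.IOException
private void receive() {
    try {
        URL url = new URL("http://stream.meetup.com/2/rsvps");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            // Until stopped or the stream ends, keep handing lines to Spark
            while (!isStopped() && (line = reader.readLine()) != null) {
                store(line);
            }
        } finally {
            connection.disconnect();
        }
        restart("Trying to connect again");
    } catch (IOException e) {
        restart("Could not connect", e);
    } catch (Throwable t) {
        restart("Error receiving data", t);
    }
}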

Netty performs slow at writing

Netty 4.1 (on OpenJDK 1.6.0_32 and CentOS 6.4) message sending is strangely slow. According to the profiler, DefaultChannelHandlerContext.writeAndFlush accounts for the biggest share (60%) of the running time, while the decoding process is not emphasized in the profiler at all. Small messages are being processed, so maybe the bootstrap options are not set correctly (TCP_NODELAY is true and nothing improved)?
A DefaultEventExecutorGroup is used both in the server and in the client to avoid blocking Netty's main event loop; the 'DataServer' and 'DataClient' classes run the business logic there, and the messages are sent from those classes through context.writeAndFlush(...). Is there a more proper/faster way? Using straight ByteBuf.writeBytes(..) serialization in the encoder and a ReplayingDecoder in the decoder made no difference in encoding speed. Sorry for the lengthy code; neither the 'Netty in Action' book nor the documentation helped.
JProfiler's call tree of the client side: http://i62.tinypic.com/dw4e43.jpg
The server class is:
public class NettyServer
{
EventLoopGroup incomingLoopGroup = null;
EventLoopGroup workerLoopGroup = null;
ServerBootstrap serverBootstrap = null;
int port;
DataServer dataServer = null;
DefaultEventExecutorGroup dataEventExecutorGroup = null;
DefaultEventExecutorGroup dataEventExecutorGroup2 = null;
public ChannelFuture serverChannelFuture = null;
public NettyServer(int port)
{
this.port = port;
dataServer = new DataServer(this);
}
public void run() throws Exception
{
incomingLoopGroup = new NioEventLoopGroup();
workerLoopGroup = new NioEventLoopGroup();
dataEventExecutorGroup = new DefaultEventExecutorGroup(5);
dataEventExecutorGroup2 = new DefaultEventExecutorGroup(5);
try
{
ChannelInitializer<SocketChannel> channelInitializer =
new ChannelInitializer<SocketChannel>()
{
@Override
protected void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(new MessageByteDecoder());
ch.pipeline().addLast(new MessageByteEncoder());
ch.pipeline().addLast(dataEventExecutorGroup, new DataServerInboundHandler(dataServer, NettyServer.this));
ch.pipeline().addLast(dataEventExecutorGroup2, new DataServerDataHandler(dataServer));
}
};
// bootstrap the server
serverBootstrap = new ServerBootstrap();
serverBootstrap.group(incomingLoopGroup, workerLoopGroup)
.channel(NioServerSocketChannel.class)
.childHandler(channelInitializer)
.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
.option(ChannelOption.TCP_NODELAY, true)
.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 32 * 1024)
.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 8 * 1024)
.childOption(ChannelOption.SO_KEEPALIVE, true);
serverChannelFuture = serverBootstrap.bind(port).sync();
serverChannelFuture.channel().closeFuture().sync();
}
finally
{
incomingLoopGroup.shutdownGracefully();
workerLoopGroup.shutdownGracefully();
}
}
}
The client class:
public class NettyClient
{
Bootstrap clientBootstrap = null;
EventLoopGroup workerLoopGroup = null;
String serverHost = null;
int serverPort = -1;
ChannelFuture clientFutureChannel = null;
DataClient dataClient = null;
DefaultEventExecutorGroup dataEventExecutorGroup = new DefaultEventExecutorGroup(5);
DefaultEventExecutorGroup dataEventExecutorGroup2 = new DefaultEventExecutorGroup(5);
public NettyClient(String serverHost, int serverPort)
{
this.serverHost = serverHost;
this.serverPort = serverPort;
}
public void run() throws Exception
{
workerLoopGroup = new NioEventLoopGroup();
try
{
this.dataClient = new DataClient();
ChannelInitializer<SocketChannel> channelInitializer =
new ChannelInitializer<SocketChannel>()
{
@Override
protected void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(new MessageByteDecoder());
ch.pipeline().addLast(new MessageByteEncoder());
ch.pipeline().addLast(dataEventExecutorGroup, new ClientInboundHandler(dataClient, NettyClient.this));
ch.pipeline().addLast(dataEventExecutorGroup2, new ClientDataHandler(dataClient));
}
};
clientBootstrap = new Bootstrap();
clientBootstrap.group(workerLoopGroup);
clientBootstrap.channel(NioSocketChannel.class);
clientBootstrap.option(ChannelOption.SO_KEEPALIVE, true);
clientBootstrap.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
clientBootstrap.option(ChannelOption.TCP_NODELAY, true);
clientBootstrap.option(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 32 * 1024);
clientBootstrap.option(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 8 * 1024);
clientBootstrap.handler(channelInitializer);
clientFutureChannel = clientBootstrap.connect(serverHost, serverPort).sync();
clientFutureChannel.channel().closeFuture().sync();
}
finally
{
workerLoopGroup.shutdownGracefully();
}
}
}
The message class:
public class Message implements Serializable
{
public static final byte MSG_FIELD = 0;
public static final byte MSG_HELLO = 1;
public static final byte MSG_LOG = 2;
public static final byte MSG_FIELD_RESPONSE = 3;
public static final byte MSG_MAP_KEY_VALUE = 4;
public static final byte MSG_STATS_FILE = 5;
public static final byte MSG_SHUTDOWN = 6;
public byte msgID;
public byte msgType;
public String key;
public String value;
public byte method;
public byte id;
}
The decoder:
public class MessageByteDecoder extends ByteToMessageDecoder
{
private Kryo kryoCodec = new Kryo();
private int contentSize = 0;
@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> out) //throws Exception
{
if (!buffer.isReadable() || buffer.readableBytes() < 4) // we need at least integer
return;
// read header
if (contentSize == 0) {
contentSize = buffer.readInt();
}
if (buffer.readableBytes() < contentSize)
return;
// read content
byte [] buf = new byte[contentSize];
buffer.readBytes(buf);
Input in = new Input(buf, 0, buf.length);
out.add(kryoCodec.readObject(in, Message.class));
contentSize = 0;
}
}
The encoder:
public class MessageByteEncoder extends MessageToByteEncoder<Message>
{
Kryo kryoCodec = new Kryo();
public MessageByteEncoder()
{
super(false);
}
@Override
protected void encode(ChannelHandlerContext ctx, Message msg, ByteBuf out) throws Exception
{
int offset = out.arrayOffset() + out.writerIndex();
byte [] inArray = out.array();
Output kryoOutput = new OutputWithOffset(inArray, inArray.length, offset + 4);
// serialize message content
kryoCodec.writeObject(kryoOutput, msg);
// write length of the message content at the beginning of the array
out.writeInt(kryoOutput.position());
out.writerIndex(out.writerIndex() + kryoOutput.position());
}
}
Client's business logic run in DefaultEventExecutorGroup:
public class DataClient
{
ChannelHandlerContext ctx;
// ...
public void processData()
{
// ...
while ((line = br.readLine()) != null)
{
// ...
process = new CountDownLatch(columns.size());
for(Column c : columns)
{
// sending column data to the server for processing
ctx.channel().eventLoop().execute(new Runnable() {
@Override
public void run() {
ctx.writeAndFlush(Message.createMessage(msgID, processID, c.key, c.value));
}});
}
// block until all the processed column fields of this row are returned from the server
process.await();
// write processed line to file ...
}
// ...
}
// ...
}
Client's message handling:
public class ClientInboundHandler extends ChannelInboundHandlerAdapter
{
DataClient dataClient = null;
// ...
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg)
{
// dispatch the message to the listeners
Message m = (Message) msg;
switch(m.msgType)
{
case Message.MSG_FIELD_RESPONSE: // message with processed data is received from the server
// decreases the 'process' CountDownLatch in the processData() method
dataClient.setProcessingResult(m.msgID, m.value);
break;
// ...
}
// forward the message to the pipeline
ctx.fireChannelRead(msg);
}
// ...
}
Server's message handling:
public class ServerInboundHandler extends ChannelInboundHandlerAdapter
{
private DataServer dataServer = null;
// ...
@Override
public void channelRead(ChannelHandlerContext ctx, Object obj) throws Exception
{
Message msg = (Message) obj;
switch(msg.msgType)
{
case Message.MSG_FIELD:
dataServer.processField(msg, ctx);
break;
// ...
}
ctx.fireChannelRead(msg);
}
//...
}
Server's business logic run in DefaultEventExecutorGroup:
public class DataServer
{
// ...
public void processField(final Message msg, final ChannelHandlerContext context)
{
context.executor().submit(new Runnable()
{
@Override
public void run()
{
String processedValue = (String) processField(msg.key, msg.value);
final Message responseToClient = Message.createResponseFieldMessage(msg.msgID, processedValue);
// send processed data to the client
context.channel().eventLoop().submit(new Runnable(){
@Override
public void run() {
context.writeAndFlush(responseToClient);
}
});
}
});
}
// ...
}
Please try using CentOS 7.0.
I've had a similar problem:
The same Netty 4 program runs very fast on CentOS 7.0 (about 40k msg/s), but can't write more than about 8k msg/s on CentOS 6.3 and 6.5 (I haven't tried 6.4).
There is no need to submit stuff to the EventLoop. Just call Channel.writeAndFlush(...) directly in your DataClient and DataServer.
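In DataServer.processField() that means dropping the inner hop back onto the event loop (and likewise the ctx.channel().eventLoop().execute(...) wrapper in DataClient.processData()), roughly:
public void processField(final Message msg, final ChannelHandlerContext context)
{
    context.executor().submit(new Runnable()
    {
        @Override
        public void run()
        {
            String processedValue = (String) processField(msg.key, msg.value);
            // writeAndFlush is safe to call from any thread; Netty schedules the actual
            // write on the channel's event loop itself, so no extra submit is needed.
            context.writeAndFlush(Message.createResponseFieldMessage(msg.msgID, processedValue));
        }
    });
}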

How to instantiate a WebSocketAdapter instance for Jetty websockets

I followed the example to create a websocket server:
Server server = new Server();
ServerConnector connector = new ServerConnector(server);
connector.setPort(port);
server.addConnector(connector);
ServletContextHandler servletContextHandler = new ServletContextHandler(server, "/", true, false);
EventServlet es = injector.getInstance(EventServlet.class);
servletContextHandler.addServlet(new ServletHolder(es), "/events/*");
The EventServlet class looks like:
public class EventServlet extends WebSocketServlet {
#Override
public void configure(WebSocketServletFactory factory) {
factory.getPolicy().setIdleTimeout(10000);
factory.register(EventSocketCache.class);
}
}
The EventSocketCache looks like:
public class EventSocketCache extends WebSocketAdapter {
private static int i = 0;
private static int counter = 0;
private static Map<Integer, Session> sessionMap = new HashMap<>();
private final Cache<String, String> testCache;
@Inject
public EventSocketCache(Cache<String, String> testCache) {
this.testCache = testCache;
}
@Override
public void onWebSocketConnect(Session session) {
super.onWebSocketConnect(session);
System.out.println("Socket Connected: " + session);
System.out.println("Connect: " + session.getRemoteAddress().getAddress());
try {
session.getRemote().sendString("Hello Webbrowser");
session.setIdleTimeout(50000);
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void onWebSocketText(String message) {
super.onWebSocketText(message);
System.out.println("Received TEXT message: " + message);
}
@Override
public void onWebSocketBinary(byte[] payload, int offset, int len) {
byte[] newData = Arrays.copyOfRange(payload, offset, offset + len);
try {
Common.Success success = Common.Success.parseFrom(newData);
System.err.println("------> " + success.getIsSuccess());
} catch (InvalidProtocolBufferException e) {
e.printStackTrace();
}
}
@Override
public void onWebSocketClose(int statusCode, String reason) {
System.err.println("^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^");
// Remove from the list here....
super.onWebSocketClose(statusCode, reason);
System.out.println("Socket Closed: [" + statusCode + "] " + reason);
}
@Override
public void onWebSocketError(Throwable cause) {
System.err.println("######################################");
super.onWebSocketError(cause);
cause.printStackTrace(System.err);
}
}
Now when I use my client and send a request, I end up getting:
org.eclipse.jetty.websocket.api.UpgradeException: Didn't switch protocols
at org.eclipse.jetty.websocket.client.io.UpgradeConnection.validateResponse(UpgradeConnection.java:249)
at org.eclipse.jetty.websocket.client.io.UpgradeConnection.read(UpgradeConnection.java:181)
at org.eclipse.jetty.websocket.client.io.UpgradeConnection.onFillable(UpgradeConnection.java:126)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.run(AbstractConnection.java:358)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:596)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:527)
at java.lang.Thread.run(Thread.java:722)
Disconnected from the target VM, address: '127.0.0.1:63256', transport: 'socket'
java.util.concurrent.ExecutionException: org.eclipse.jetty.websocket.api.UpgradeException: Didn't switch protocols
at org.eclipse.jetty.util.FuturePromise.get(FuturePromise.java:123)
at com.gamecenter.websockets.EventClient.main(EventClient.java:25)
Caused by: org.eclipse.jetty.websocket.api.UpgradeException: Didn't switch protocols
at org.eclipse.jetty.websocket.client.io.UpgradeConnection.validateResponse(UpgradeConnection.java:249)
at org.eclipse.jetty.websocket.client.io.UpgradeConnection.read(UpgradeConnection.java:181)
at org.eclipse.jetty.websocket.client.io.UpgradeConnection.onFillable(UpgradeConnection.java:126)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.run(AbstractConnection.java:358)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:596)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:527)
at java.lang.Thread.run(Thread.java:722)
It seems as though there is a problem creating an instance of EventSocketCache; if I don't have the constructor in there, everything works fine.
How do I instantiate EventSocketCache properly and register it with EventServlet so that things work?
I guess I've found a solution for your problem. You have to use a WebSocketCreator in your WebSocketServlet:
public class MenuServlet extends WebSocketServlet {
@Inject
private Injector injector;
@Override
public void configure(WebSocketServletFactory webSocketServletFactory) {
// Register your Adapater
webSocketServletFactory.register(MenuSocket.class);
// Get the current creator (for reuse)
final WebSocketCreator creator = webSocketServletFactory.getCreator();
// Set your custom Creator
webSocketServletFactory.setCreator(new WebSocketCreator() {
@Override
public Object createWebSocket(ServletUpgradeRequest servletUpgradeRequest, ServletUpgradeResponse servletUpgradeResponse) {
Object webSocket = creator.createWebSocket(servletUpgradeRequest, servletUpgradeResponse);
// Use the object created by the default creator and inject your members
injector.injectMembers(webSocket);
return webSocket;
}
});
}
}
There you can inject your members into your WebSocketAdapter. This actually worked for me.
