I have started working on a FreeMarker debugger using breakpoints etc. The supplied framework is based on Java RMI. So far I get it to suspend at one breakpoint, but then ... nothing.
Is there a very basic example setup for the server part and the client part, other than the debug/impl classes supplied with the sources? That would be of great help.
This is my server class:
class DebuggerServer {
private final int port;
private final String templateName1;
private final Environment templateEnv;
private boolean stop = false;
public DebuggerServer(String templateName) throws IOException {
System.setProperty("freemarker.debug.password", "hello");
port = SecurityUtilities.getSystemProperty("freemarker.debug.port", Debugger.DEFAULT_PORT).intValue();
Configuration cfg = new Configuration();
// Some other recommended settings:
cfg.setIncompatibleImprovements(new Version(2, 3, 20));
cfg.setDefaultEncoding("UTF-8");
cfg.setLocale(Locale.US);
cfg.setTemplateExceptionHandler(TemplateExceptionHandler.RETHROW_HANDLER);
Template template = cfg.getTemplate(templateName);
templateName1 = template.getName();
System.out.println("Debugging " + templateName1);
Map<String, Object> root = new HashMap<>();
Writer consoleWriter = new OutputStreamWriter(System.out);
templateEnv = new Environment(template, null, consoleWriter);
DebuggerService.registerTemplate(template);
}
public void start() {
new Thread(new Runnable() {
@Override
public void run() {
startInternal();
}
}, "FreeMarker Debugger Server Acceptor").start();
}
private void startInternal() {
boolean handled = false;
while (!stop) {
List breakPoints = DebuggerService.getBreakpoints(templateName1);
for (int i = 0; i < breakPoints.size(); i++) {
try {
Breakpoint bp = (Breakpoint) breakPoints.get(i);
handled = DebuggerService.suspendEnvironment(templateEnv, templateName1, bp.getLine());
} catch (RemoteException e) {
System.err.println(e.getMessage());
}
}
}
}
public void stop() {
this.stop = true;
}
}
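One thing worth pointing out about the server as written: it registers the template but never actually processes it, and FreeMarker can only suspend at a breakpoint while an Environment is executing on some thread. A minimal sketch of the missing step, assuming the intent is simply to run the template so the debugger has something to suspend (processTemplate() is a hypothetical addition to the class above):
public void processTemplate() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                // Environment.process() walks the template; the debug
                // machinery can then suspend it at registered breakpoints
                templateEnv.process();
            } catch (TemplateException | IOException e) {
                e.printStackTrace();
            }
        }
    }, "FreeMarker Template Processor").start();
}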
This is the client class:
class DebuggerClientHandler {
private final Debugger client;
private boolean stop = false;
public DebuggerClientHandler(String templateName) throws IOException {
// System.setProperty("freemarker.debug.password", "hello");
// System.setProperty("java.rmi.server.hostname", "192.168.2.160");
client = DebuggerClient.getDebugger(InetAddress.getByName("localhost"), Debugger.DEFAULT_PORT, "hello");
client.addDebuggerListener(environmentSuspendedEvent -> {
System.out.println("Break " + environmentSuspendedEvent.getName() + " at line " + environmentSuspendedEvent.getLine());
// environmentSuspendedEvent.getEnvironment().resume();
});
}
public void start() {
new Thread(new Runnable() {
@Override
public void run() {
startInternal();
}
}, "FreeMarker Debugger Server").start();
}
private void startInternal() {
while (!stop) {
}
}
public void stop() {
this.stop = true;
}
public void addBreakPoint(String s, int i) throws RemoteException {
Breakpoint bp = new Breakpoint(s, i);
List breakpoints = client.getBreakpoints();
client.addBreakpoint(bp);
}
}
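For completeness, a hypothetical wiring of the two classes (the template name and line number are made up; the server and the client would normally live in separate JVMs, talking over RMI):
// JVM 1: run the template under the debugger server
DebuggerServer server = new DebuggerServer("test.ftl");
server.start();

// JVM 2: attach, set a breakpoint, listen for suspensions
DebuggerClientHandler client = new DebuggerClientHandler("test.ftl");
client.addBreakPoint("test.ftl", 5);
client.start();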
Liferay IDE (https://github.com/liferay/liferay-ide) has FreeMarker template debug support (https://issues.liferay.com/browse/IDE-976), so somehow they managed to use it. I have never seen it in action though. Other than that, I'm not aware of anything that uses the debug API.
I want to hook all methods of ViewGroup, so I wrote the code below:
final Class<?> mViewGroup = XposedHelpers.findClass("android.view.ViewGroup", lpparam.classLoader);
for (final Method method : mViewGroup.getDeclaredMethods()) {
if (Modifier.isAbstract(method.getModifiers())) {
XposedBridge.log("skip abstract:" + method.getName());
continue;
}
XposedBridge.log("----" + method.getName());
XposedBridge.hookMethod(method, methodHook);
}
final StringBuilder sb = new StringBuilder();
XC_MethodHook methodHook = new XC_MethodHook() {
@Override
protected void beforeHookedMethod(XC_MethodHook.MethodHookParam param) throws Throwable {
dumpParams(param);
}
@Override
protected void afterHookedMethod(XC_MethodHook.MethodHookParam param) throws Throwable {
//param.setResult(false);
XposedBridge.log("after:" + param.getResult());
}
};
private void dumpParams(XC_MethodHook.MethodHookParam param) {
sb.setLength(0);
sb.append(param.method.getName()).append("(");
for (Object o:param.args) {
String typnam = "";
String value = "null";
if (o != null) {
typnam = o.getClass().getName();
value = o.toString();
}
sb.append(typnam).append(":").append(value).append(", ");
}
sb.append(")"); // close the dumped parameter list
XposedBridge.log(sb.toString());
}
When I run it, hooking the class itself seems fine, but during execution it crashes at android.view.View$AttachInfo:
am_crash: `java.lang.IllegalAccessError,Illegal class access: 'EdHooker45' attempting to access 'android.view.View$AttachInfo' (declaration of 'EdHooker45' appears in /data/user_de/0/.../1129433416.jar),NULL,120]`
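The error suggests the generated hook class cannot legally access package-private types such as android.view.View$AttachInfo that appear in some ViewGroup method signatures. As an assumption (not verified against this exact crash), one mitigation is to skip methods whose parameter or return types are not public:
for (final Method method : mViewGroup.getDeclaredMethods()) {
    if (Modifier.isAbstract(method.getModifiers())) {
        XposedBridge.log("skip abstract:" + method.getName());
        continue;
    }
    // assumption: hooking methods that mention non-public types in their
    // signature triggers the IllegalAccessError, so filter them out
    boolean accessible = Modifier.isPublic(method.getReturnType().getModifiers());
    for (Class<?> p : method.getParameterTypes()) {
        if (!Modifier.isPublic(p.getModifiers())) {
            accessible = false;
            break;
        }
    }
    if (!accessible) {
        XposedBridge.log("skip inaccessible:" + method.getName());
        continue;
    }
    XposedBridge.hookMethod(method, methodHook);
}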
I am trying to do a nested null check and then clear the map in the nested object if the map is not null.
The following is my hypothetical code. I am wondering whether this is the right way to do it, or whether there is a more elegant solution.
package exp.myJavaLab.Experiments;
import java.util.*;
public class OptionalTest {
public Inner inner;
public static void main(String[] args) {
OptionalTest testObj = new OptionalTest();
Pojo pojo1 = new Pojo();
pojo1.id = 1;
Map<String, String> dataMap = new HashMap<>();
dataMap.put("a","b");
pojo1.dataMap = dataMap;
Pojo pojo2 = new Pojo();
pojo2.id = 2;
Inner inner = new Inner();
inner.pojoList = Arrays.asList(pojo1, pojo2);
testObj.inner = inner;
System.out.println(testObj);
Optional.ofNullable(testObj.inner)
.map(Inner::getPojoList)
.ifPresent(pojos -> pojos.forEach(pojo -> {
if (pojo.getDataMap() != null) {
pojo.getDataMap().clear();
}
}));
System.out.println(testObj);
}
@Override
public String toString() {
final StringBuilder sb = new StringBuilder("OptionalTest{");
sb.append("inner=").append(inner);
sb.append('}');
return sb.toString();
}
}
class Inner {
public List<Pojo> pojoList;
public List<Pojo> getPojoList() {
return pojoList;
}
@Override
public String toString() {
final StringBuilder sb = new StringBuilder("Inner{");
sb.append("pojoList=").append(pojoList);
sb.append('}');
return sb.toString();
}
}
class Pojo {
public Map<String, String> dataMap;
public int id;
@Override
public String toString() {
final StringBuilder sb = new StringBuilder("Pojo{");
sb.append("dataMap=").append(dataMap);
sb.append(", id=").append(id);
sb.append('}');
return sb.toString();
}
public Map<String, String> getDataMap() {
return dataMap;
}
public int getId() {
return id;
}
}
In my opinion, collections should never be null.
You could declare your pojoList and dataMap as private and instantiate them. Your class then needs some add methods, and you can then be sure getDataMap() never returns null:
class Pojo {
private Map<String, String> dataMap = new HashMap<>();
public Map<String, String> getDataMap() {
return dataMap;
}
public void add(String key, String value) {
dataMap.put(key, value);
}
}
Then you don't need to check for null:
Optional.ofNullable(testObj.inner)
.map(Inner::getPojoList)
.ifPresent(pojos -> pojos.forEach(pojo -> { pojo.getDataMap().clear(); }));
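Applying the same idea to Inner keeps the whole chain null-safe (a sketch of the suggestion above, not the original poster's code):
class Inner {
    // instantiated eagerly, so getPojoList() never returns null
    private List<Pojo> pojoList = new ArrayList<>();

    public List<Pojo> getPojoList() {
        return pojoList;
    }

    public void add(Pojo pojo) {
        pojoList.add(pojo);
    }
}
With that, the Optional.ofNullable(testObj.inner) guard only has to protect against inner itself being null.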
Netty 4.1 (on OpenJDK 1.6.0_32 and CentOS 6.4) message sending is strangely slow. According to the profiler, DefaultChannelHandlerContext.writeAndFlush accounts for the biggest share (60%) of the running time, while the decoding process barely registers. Small messages are being processed, and maybe the bootstrap options are not set correctly (TCP_NODELAY is true and nothing improved)? A DefaultEventExecutorGroup is used both in the server and in the client to avoid blocking Netty's main event loop; the 'DataServer' and 'DataClient' classes with the business logic run there, and messages are sent from them through context.writeAndFlush(...). Is there a more proper/faster way? Using straight ByteBuf.writeBytes(..) serialization in the encoder and a ReplayingDecoder in the decoder made no difference in encoding speed. Sorry for the lengthy code; neither the 'Netty in Action' book nor the documentation helped.
JProfiler's call tree of the client side: http://i62.tinypic.com/dw4e43.jpg
The server class is:
public class NettyServer
{
EventLoopGroup incomingLoopGroup = null;
EventLoopGroup workerLoopGroup = null;
ServerBootstrap serverBootstrap = null;
int port;
DataServer dataServer = null;
DefaultEventExecutorGroup dataEventExecutorGroup = null;
DefaultEventExecutorGroup dataEventExecutorGroup2 = null;
public ChannelFuture serverChannelFuture = null;
public NettyServer(int port)
{
this.port = port;
dataServer = new DataServer(this);
}
public void run() throws Exception
{
incomingLoopGroup = new NioEventLoopGroup();
workerLoopGroup = new NioEventLoopGroup();
dataEventExecutorGroup = new DefaultEventExecutorGroup(5);
dataEventExecutorGroup2 = new DefaultEventExecutorGroup(5);
try
{
ChannelInitializer<SocketChannel> channelInitializer =
new ChannelInitializer<SocketChannel>()
{
@Override
protected void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(new MessageByteDecoder());
ch.pipeline().addLast(new MessageByteEncoder());
ch.pipeline().addLast(dataEventExecutorGroup, new DataServerInboundHandler(dataServer, NettyServer.this));
ch.pipeline().addLast(dataEventExecutorGroup2, new DataServerDataHandler(dataServer));
}
};
// bootstrap the server
serverBootstrap = new ServerBootstrap();
serverBootstrap.group(incomingLoopGroup, workerLoopGroup)
.channel(NioServerSocketChannel.class)
.childHandler(channelInitializer)
.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
.option(ChannelOption.TCP_NODELAY, true)
.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 32 * 1024)
.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 8 * 1024)
.childOption(ChannelOption.SO_KEEPALIVE, true);
serverChannelFuture = serverBootstrap.bind(port).sync();
serverChannelFuture.channel().closeFuture().sync();
}
finally
{
incomingLoopGroup.shutdownGracefully();
workerLoopGroup.shutdownGracefully();
}
}
}
The client class:
public class NettyClient
{
Bootstrap clientBootstrap = null;
EventLoopGroup workerLoopGroup = null;
String serverHost = null;
int serverPort = -1;
ChannelFuture clientFutureChannel = null;
DataClient dataClient = null;
DefaultEventExecutorGroup dataEventExecutorGroup = new DefaultEventExecutorGroup(5);
DefaultEventExecutorGroup dataEventExecutorGroup2 = new DefaultEventExecutorGroup(5);
public NettyClient(String serverHost, int serverPort)
{
this.serverHost = serverHost;
this.serverPort = serverPort;
}
public void run() throws Exception
{
workerLoopGroup = new NioEventLoopGroup();
try
{
this.dataClient = new DataClient();
ChannelInitializer<SocketChannel> channelInitializer =
new ChannelInitializer<SocketChannel>()
{
@Override
protected void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(new MessageByteDecoder());
ch.pipeline().addLast(new MessageByteEncoder());
ch.pipeline().addLast(dataEventExecutorGroup, new ClientInboundHandler(dataClient, NettyClient.this));
ch.pipeline().addLast(dataEventExecutorGroup2, new ClientDataHandler(dataClient));
}
};
clientBootstrap = new Bootstrap();
clientBootstrap.group(workerLoopGroup);
clientBootstrap.channel(NioSocketChannel.class);
clientBootstrap.option(ChannelOption.SO_KEEPALIVE, true);
clientBootstrap.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
clientBootstrap.option(ChannelOption.TCP_NODELAY, true);
clientBootstrap.option(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 32 * 1024);
clientBootstrap.option(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 8 * 1024);
clientBootstrap.handler(channelInitializer);
clientFutureChannel = clientBootstrap.connect(serverHost, serverPort).sync();
clientFutureChannel.channel().closeFuture().sync();
}
finally
{
workerLoopGroup.shutdownGracefully();
}
}
}
The message class:
public class Message implements Serializable
{
public static final byte MSG_FIELD = 0;
public static final byte MSG_HELLO = 1;
public static final byte MSG_LOG = 2;
public static final byte MSG_FIELD_RESPONSE = 3;
public static final byte MSG_MAP_KEY_VALUE = 4;
public static final byte MSG_STATS_FILE = 5;
public static final byte MSG_SHUTDOWN = 6;
public byte msgID;
public byte msgType;
public String key;
public String value;
public byte method;
public byte id;
}
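The handlers further down call Message.createMessage(...) and Message.createResponseFieldMessage(...), which are not shown in the question. Purely so the later snippets read coherently, here is a hypothetical sketch of those factories; the mapping of arguments to fields is an assumption:
public static Message createMessage(byte msgID, byte method, String key, String value) {
    Message m = new Message();
    m.msgType = MSG_FIELD;   // assumed
    m.msgID = msgID;
    m.method = method;       // assumed to carry the 'processID'
    m.key = key;
    m.value = value;
    return m;
}

public static Message createResponseFieldMessage(byte msgID, String value) {
    Message m = new Message();
    m.msgType = MSG_FIELD_RESPONSE;
    m.msgID = msgID;
    m.value = value;
    return m;
}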
The decoder:
public class MessageByteDecoder extends ByteToMessageDecoder
{
private Kryo kryoCodec = new Kryo();
private int contentSize = 0;
@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> out) //throws Exception
{
// read header
if (contentSize == 0) {
if (buffer.readableBytes() < 4) // we need at least an int for the length header
return;
contentSize = buffer.readInt();
}
if (buffer.readableBytes() < contentSize)
return;
// read content
byte [] buf = new byte[contentSize];
buffer.readBytes(buf);
Input in = new Input(buf, 0, buf.length);
out.add(kryoCodec.readObject(in, Message.class));
contentSize = 0;
}
}
The encoder:
public class MessageByteEncoder extends MessageToByteEncoder<Message>
{
Kryo kryoCodec = new Kryo();
public MessageByteEncoder()
{
super(false);
}
@Override
protected void encode(ChannelHandlerContext ctx, Message msg, ByteBuf out) throws Exception
{
// NOTE: assumes the ByteBuf is backed by an accessible array with enough writable capacity
int offset = out.arrayOffset() + out.writerIndex();
byte[] inArray = out.array();
Output kryoOutput = new OutputWithOffset(inArray, inArray.length, offset + 4);
// serialize message content
kryoCodec.writeObject(kryoOutput, msg);
// write length of the message content at the beginning of the array
out.writeInt(kryoOutput.position());
out.writerIndex(out.writerIndex() + kryoOutput.position());
}
}
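The encoder above only works when out is backed by an accessible array with enough spare capacity, which not every allocator guarantees. A sketch of an allocation-agnostic variant that streams the Kryo output through the buffer instead (ByteBufOutputStream is Netty's OutputStream adapter for a ByteBuf):
@Override
protected void encode(ChannelHandlerContext ctx, Message msg, ByteBuf out) throws Exception {
    int lengthPos = out.writerIndex();
    out.writeInt(0); // placeholder for the content length
    Output kryoOutput = new Output(new ByteBufOutputStream(out));
    kryoCodec.writeObject(kryoOutput, msg);
    kryoOutput.flush();
    // patch the real content length in front of the serialized message
    out.setInt(lengthPos, out.writerIndex() - lengthPos - 4);
}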
Client's business logic run in DefaultEventExecutorGroup:
public class DataClient
{
ChannelHandlerContext ctx;
// ...
public void processData()
{
// ...
while ((line = br.readLine()) != null)
{
// ...
process = new CountDownLatch(columns.size());
for(Column c : columns)
{
// sending column data to the server for processing
ctx.channel().eventLoop().execute(new Runnable() {
@Override
public void run() {
ctx.writeAndFlush(Message.createMessage(msgID, processID, c.key, c.value));
}});
}
// block until all the processed column fields of this row are returned from the server
process.await();
// write processed line to file ...
}
// ...
}
// ...
}
Client's message handling:
public class ClientInboundHandler extends ChannelInboundHandlerAdapter
{
DataClient dataClient = null;
// ...
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg)
{
// dispatch the message to the listeners
Message m = (Message) msg;
switch(m.msgType)
{
case Message.MSG_FIELD_RESPONSE: // message with processed data is received from the server
// decreases the 'process' CountDownLatch in the processData() method
dataClient.setProcessingResult(m.msgID, m.value);
break;
// ...
}
// forward the message to the pipeline
ctx.fireChannelRead(msg);
}
// ...
}
Server's message handling:
public class ServerInboundHandler extends ChannelInboundHandlerAdapter
{
private DataServer dataServer = null;
// ...
@Override
public void channelRead(ChannelHandlerContext ctx, Object obj) throws Exception
{
Message msg = (Message) obj;
switch(msg.msgType)
{
case Message.MSG_FIELD:
dataServer.processField(msg, ctx);
break;
// ...
}
ctx.fireChannelRead(msg);
}
//...
}
Server's business logic run in DefaultEventExecutorGroup:
public class DataServer
{
// ...
public void processField(final Message msg, final ChannelHandlerContext context)
{
context.executor().submit(new Runnable()
{
@Override
public void run()
{
String processedValue = (String) processField(msg.key, msg.value);
final Message responseToClient = Message.createResponseFieldMessage(msg.msgID, processedValue);
// send processed data to the client
context.channel().eventLoop().submit(new Runnable(){
@Override
public void run() {
context.writeAndFlush(responseToClient);
}
});
}
});
}
// ...
}
Please try using CentOS 7.0.
I've had a similar problem:
The same Netty 4 program runs very fast on CentOS 7.0 (about 40k msg/s), but can't write more than about 8k msg/s on CentOS 6.3 and 6.5 (I haven't tried 6.4).
There is no need to submit stuff to the EventLoop. Just call Channel.writeAndFlush(...) directly in your DataClient and DataServer.
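Concretely, the hop through ctx.channel().eventLoop().execute(...) in DataClient.processData() can simply be dropped; writeAndFlush(...) is safe to call from any thread and hands the actual write over to the event loop internally. Roughly:
for (Column c : columns) {
    // no executor hop needed: Netty schedules the write on the
    // channel's event loop by itself
    ctx.channel().writeAndFlush(Message.createMessage(msgID, processID, c.key, c.value));
}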
I have this Hadoop MapReduce code that works on graph data (in adjacency-list form), somewhat similar to the usual in-adjacency-list to out-adjacency-list transformation algorithms. The main MapReduce task code is the following:
public class TestTask extends Configured
implements Tool {
public static class TTMapper extends MapReduceBase
implements Mapper<Text, TextArrayWritable, Text, NeighborWritable> {
@Override
public void map(Text key,
TextArrayWritable value,
OutputCollector<Text, NeighborWritable> output,
Reporter reporter) throws IOException {
int numNeighbors = value.get().length;
double weight = (double)1 / numNeighbors;
Text[] neighbors = (Text[]) value.toArray();
NeighborWritable me = new NeighborWritable(key, new DoubleWritable(weight));
for (int i = 0; i < neighbors.length; i++) {
output.collect(neighbors[i], me);
}
}
}
public static class TTReducer extends MapReduceBase
implements Reducer<Text, NeighborWritable, Text, Text> {
@Override
public void reduce(Text key,
Iterator<NeighborWritable> values,
OutputCollector<Text, Text> output,
Reporter arg3)
throws IOException {
ArrayList<NeighborWritable> neighborList = new ArrayList<NeighborWritable>();
while(values.hasNext()) {
neighborList.add(values.next());
}
NeighborArrayWritable neighbors = new NeighborArrayWritable
(neighborList.toArray(new NeighborWritable[0]));
Text out = new Text(neighbors.toString());
output.collect(key, out);
}
}
@Override
public int run(String[] arg0) throws Exception {
JobConf conf = Util.getMapRedJobConf("testJob",
SequenceFileInputFormat.class,
TTMapper.class,
Text.class,
NeighborWritable.class,
1,
TTReducer.class,
Text.class,
Text.class,
TextOutputFormat.class,
"test/in",
"test/out");
JobClient.runJob(conf);
return 0;
}
public static void main(String[] args) throws Exception {
int res = ToolRunner.run(new TestTask(), args);
System.exit(res);
}
}
The auxiliary code is the following:
TextArrayWritable:
public class TextArrayWritable extends ArrayWritable {
public TextArrayWritable() {
super(Text.class);
}
public TextArrayWritable(Text[] values) {
super(Text.class, values);
}
}
NeighborWritable:
public class NeighborWritable implements Writable {
private Text nodeId;
private DoubleWritable weight;
public NeighborWritable(Text nodeId, DoubleWritable weight) {
this.nodeId = nodeId;
this.weight = weight;
}
public NeighborWritable () { }
public Text getNodeId() {
return nodeId;
}
public DoubleWritable getWeight() {
return weight;
}
public void setNodeId(Text nodeId) {
this.nodeId = nodeId;
}
public void setWeight(DoubleWritable weight) {
this.weight = weight;
}
@Override
public void readFields(DataInput in) throws IOException {
nodeId = new Text();
nodeId.readFields(in);
weight = new DoubleWritable();
weight.readFields(in);
}
@Override
public void write(DataOutput out) throws IOException {
nodeId.write(out);
weight.write(out);
}
public String toString() {
return "NW[nodeId=" + (nodeId != null ? nodeId.toString() : "(null)") +
",weight=" + (weight != null ? weight.toString() : "(null)") + "]";
}
public boolean equals(Object o) {
if (!(o instanceof NeighborWritable)) {
return false;
}
NeighborWritable that = (NeighborWritable)o;
return (nodeId.equals(that.getNodeId()) && (weight.equals(that.getWeight())));
}
}
and the Util class:
public class Util {
public static JobConf getMapRedJobConf(String jobName,
Class<? extends InputFormat> inputFormatClass,
Class<? extends Mapper> mapperClass,
Class<?> mapOutputKeyClass,
Class<?> mapOutputValueClass,
int numReducer,
Class<? extends Reducer> reducerClass,
Class<?> outputKeyClass,
Class<?> outputValueClass,
Class<? extends OutputFormat> outputFormatClass,
String inputDir,
String outputDir) throws IOException {
JobConf conf = new JobConf();
if (jobName != null)
conf.setJobName(jobName);
conf.setInputFormat(inputFormatClass);
conf.setMapperClass(mapperClass);
if (numReducer == 0) {
conf.setNumReduceTasks(0);
conf.setOutputKeyClass(outputKeyClass);
conf.setOutputValueClass(outputValueClass);
conf.setOutputFormat(outputFormatClass);
} else {
// may set actual number of reducers
// conf.setNumReduceTasks(numReducer);
conf.setMapOutputKeyClass(mapOutputKeyClass);
conf.setMapOutputValueClass(mapOutputValueClass);
conf.setReducerClass(reducerClass);
conf.setOutputKeyClass(outputKeyClass);
conf.setOutputValueClass(outputValueClass);
conf.setOutputFormat(outputFormatClass);
}
// delete the existing target output folder
FileSystem fs = FileSystem.get(conf);
fs.delete(new Path(outputDir), true);
// specify input and output DIRECTORIES (not files)
FileInputFormat.addInputPath(conf, new Path(inputDir));
FileOutputFormat.setOutputPath(conf, new Path(outputDir));
return conf;
}
}
My input is the following graph (in binary format; here I am giving the text form):
1 2
2 1,3,5
3 2,4
4 3,5
5 2,4
According to the logic of the code, the output should be:
1 NWArray[size=1,{NW[nodeId=2,weight=0.3333333333333333],}]
2 NWArray[size=3,{NW[nodeId=5,weight=0.5],NW[nodeId=3,weight=0.5],NW[nodeId=1,weight=1.0],}]
3 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=4,weight=0.5],}]
4 NWArray[size=2,{NW[nodeId=5,weight=0.5],NW[nodeId=3,weight=0.5],}]
5 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=4,weight=0.5],}]
But the output is coming as:
1 NWArray[size=1,{NW[nodeId=2,weight=0.3333333333333333],}]
2 NWArray[size=3,{NW[nodeId=5,weight=0.5],NW[nodeId=5,weight=0.5],NW[nodeId=5,weight=0.5],}]
3 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=2,weight=0.3333333333333333],}]
4 NWArray[size=2,{NW[nodeId=5,weight=0.5],NW[nodeId=5,weight=0.5],}]
5 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=2,weight=0.3333333333333333],}]
I cannot understand why the expected output is not coming out. Any help will be appreciated.
Thanks.
You're falling foul of Hadoop's object re-use:
while(values.hasNext()) {
neighborList.add(values.next());
}
values.next() will return the same object reference every time; only the underlying contents of that object change on each iteration (the readFields method is called to re-populate the contents).
I suggest you amend the loop to the following (you'll need to obtain the Configuration conf variable from a setup method, unless you can get it from the Reporter or OutputCollector; sorry, I don't use the old API):
while (values.hasNext()) {
neighborList.add(ReflectionUtils.copy(conf, values.next(), new NeighborWritable()));
}
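If wiring a Configuration into the old API is awkward, a plain manual copy works just as well (a sketch, relying only on the NeighborWritable fields defined in the question):
while (values.hasNext()) {
    NeighborWritable current = values.next();
    // defensive copy, since Hadoop re-populates the same instance
    neighborList.add(new NeighborWritable(
            new Text(current.getNodeId()),
            new DoubleWritable(current.getWeight().get())));
}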
But I still can't understand why my unit test passed, then. Here is the code:
public class UWLTInitReducerTest {
private Text key;
private Iterator<NeighborWritable> values;
private NeighborArrayWritable nodeData;
private TTReducer reducer;
/**
* Set up the states for calling the map function
*/
@Before
public void setUp() throws Exception {
key = new Text("1001");
NeighborWritable[] neighbors = new NeighborWritable[4];
for (int i = 0; i < 4; i++) {
neighbors[i] = new NeighborWritable(new Text("300" + i), new DoubleWritable((double) 1 / (1 + i)));
}
values = Arrays.asList(neighbors).iterator();
nodeData = new NeighborArrayWritable(neighbors);
reducer = new TTReducer();
}
/**
* Test method for InitModelMapper#map - valid input
*/
@Test
public void testMapValid() {
// mock the output object
OutputCollector<Text, UWLTNodeData> output = mock(OutputCollector.class);
try {
// call the API
reducer.reduce(key, values, output, null);
// in order (sequential) verification of the calls to output.collect()
verify(output).collect(key, nodeData);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
Why didn't this code catch the bug?
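A likely reason, following the re-use explanation above (this is an inference, not something I have run): Arrays.asList(neighbors).iterator() returns a distinct, fully populated object from every next() call, whereas Hadoop's real value iterator keeps handing back the same instance, re-filled via readFields. The mock iterator therefore never reproduces the aliasing that breaks the reducer. A test double that mimics Hadoop's behaviour would have caught it:
// hypothetical iterator that re-uses a single instance, as Hadoop does
Iterator<NeighborWritable> reusingValues = new Iterator<NeighborWritable>() {
    private final NeighborWritable reused = new NeighborWritable();
    private int i = 0;

    @Override
    public boolean hasNext() {
        return i < neighbors.length;
    }

    @Override
    public NeighborWritable next() {
        // overwrite the one shared instance, as readFields would
        reused.setNodeId(neighbors[i].getNodeId());
        reused.setWeight(neighbors[i].getWeight());
        i++;
        return reused;
    }

    @Override
    public void remove() {
        throw new UnsupportedOperationException();
    }
};
Feeding reusingValues to reducer.reduce(...) should make the test fail the same way the real job does.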