Storm Trident 'Average' aggregator - apache-storm

I am a newbie to Trident and I'm looking to create an 'Average' aggregator, similar to 'Sum()' but for averages. The following does not work:
public class Average implements CombinerAggregator<Long> {
public Long init(TridentTuple tuple)
{
return (Long) tuple.getValue(0);
}
public Long combine(Long val1, Long val2) {
return val1 + val2 / 2;
}
public Long zero(){
return 0L;
}
}
It may not be exactly syntactically correct, but that's the idea. Please help if you can. Given two tuples with values [2,4,1] and [2,2,5], with fields 'a', 'b', and 'c', an average over field 'b' should return 3. I'm not entirely sure how init() and zero() work.
Thank you so much for your help in advance.
Eli

public class Average implements CombinerAggregator<Number> {
int count = 0;
double sum = 0;
@Override
public Double init(final TridentTuple tuple) {
this.count++;
if (!(tuple.getValue(0) instanceof Double)) {
double d = ((Number) tuple.getValue(0)).doubleValue();
this.sum += d;
return d;
}
this.sum += (Double) tuple.getValue(0);
return (Double) tuple.getValue(0);
}
@Override
public Double combine(final Number val1, final Number val2) {
return this.sum / this.count;
}
@Override
public Double zero() {
this.sum = 0;
this.count = 0;
return 0D;
}
}
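One caveat on the approach above: Trident serializes aggregator instances and may run combine() on partial results from different partitions, so mutable count/sum fields on the object are not guaranteed to behave. A sketch of a stateless alternative that carries a (sum, count) pair through combine() and divides only at the end; AvgPair is my own helper name, not a Trident type, and the import packages vary by Storm version:

import storm.trident.operation.CombinerAggregator;
import storm.trident.tuple.TridentTuple;

public class Average implements CombinerAggregator<Average.AvgPair> {

    // Plain (sum, count) carrier; Serializable so Trident can ship it between tasks.
    public static class AvgPair implements java.io.Serializable {
        double sum;
        long count;
        AvgPair(double sum, long count) { this.sum = sum; this.count = count; }
        public double average() { return count == 0 ? 0.0 : sum / count; }
    }

    @Override
    public AvgPair init(TridentTuple tuple) {
        // each incoming tuple contributes its value and a count of 1
        return new AvgPair(((Number) tuple.getValue(0)).doubleValue(), 1L);
    }

    @Override
    public AvgPair combine(AvgPair a, AvgPair b) {
        // adding pairs is associative, so partial combines on any partition are safe
        return new AvgPair(a.sum + b.sum, a.count + b.count);
    }

    @Override
    public AvgPair zero() {
        return new AvgPair(0, 0); // identity element, used when a partition has no tuples
    }
}

This also answers the init()/zero() question: init() turns one tuple into a partial result, zero() is the empty partial result, and combine() merges two partials. For the question's tuples, field 'b' gives (4 + 2) / 2 = 3.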

I am a complete newbie when it comes to Trident as well, so I'm not entirely sure whether the following will work. But it might:
public class AvgAgg extends BaseAggregator<AvgState> {
static class AvgState {
long count = 0;
long total = 0;
double getAverage() {
return (double) total / count;
}
}
public AvgState init(Object batchId, TridentCollector collector) {
return new AvgState();
}
public void aggregate(AvgState state, TridentTuple tuple, TridentCollector collector) {
state.count++;
state.total += tuple.getLong(0); // add the field's value, not just a count
}
public void complete(AvgState state, TridentCollector collector) {
collector.emit(new Values(state.getAverage()));
}
}
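If it helps, a hypothetical way to wire the aggregator into a topology (the stream name and spout are placeholders; the field name 'b' comes from the question):

TridentTopology topology = new TridentTopology();
topology.newStream("numbers", spout)
        .aggregate(new Fields("b"), new AvgAgg(), new Fields("average")); // emits one averaged value per batch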

Related

Recycler View with Header and Edit Text

I have a RecyclerView with a header, achieved by using two different element types. In my header there is an EditText which I want to use for filtering the non-header elements of the list. Below is my current implementation; I have one concern and one problem with it.
My concern is that what I am doing in publishResults with notifyItemRangeRemoved and notifyItemInserted is the wrong way to update the RecyclerView. I originally used notifyDataSetChanged, but this would cause the header row to be refreshed too and the EditText to lose focus. What I really want is a way to refresh only the item rows and leave the header row untouched.
My current problem is that with the existing code, if I scroll down too much the EditText loses focus. I want the EditText to keep focus even if I scroll to the bottom of the list.
The code used to use a ListView with setHeaderView and that worked somehow, so there must be some way of achieving the goal; I'm just not sure what the trick with a RecyclerView is. Any help is much appreciated.
public class SideListAdapter extends RecyclerView.Adapter<RecyclerView.ViewHolder> implements Filterable {
private static final int TYPE_HEADER = 0;
private static final int TYPE_ITEM = 1;
private final List<String> data;
public List<String> filteredData;
private HeaderActionListener headerActionListener;
private Context context;
public SideListAdapter(Context context, ArrayList<String> data, HeaderActionListener headerActionListener) {
this.data = data;
filteredData = new ArrayList<>(data);
this.context = context;
this.headerActionListener = headerActionListener;
}
@Override
public Filter getFilter() {
return new TestFilter();
}
static class SideListItem extends RecyclerView.ViewHolder {
LinearLayout baseLayout;
public SideListItem(View itemView) {
super(itemView);
baseLayout = (LinearLayout) itemView.findViewById(R.id.settings_defaultcolor);
}
}
class SideListHeader extends RecyclerView.ViewHolder {
EditText sort;
public SideListHeader(View itemView) {
super(itemView);
sort = (EditText) itemView.findViewById(R.id.sort);
}
}
@Override
public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
if (viewType == TYPE_ITEM) {
View v = LayoutInflater.from(parent.getContext()).inflate(R.layout.list_item, parent, false);
return new SideListItem(v);
} else if (viewType == TYPE_HEADER) {
View v = LayoutInflater.from(parent.getContext()).inflate(R.layout.header, parent, false);
return new SideListHeader(v);
}
throw new RuntimeException("there is no type that matches the type " + viewType + "; make sure you're using types correctly");
}
public interface HeaderActionListener {
boolean onSortEditorAction(TextView arg0, int arg1, KeyEvent arg2);
}
@Override
public void onBindViewHolder(RecyclerView.ViewHolder holder, final int position) {
if (holder instanceof SideListHeader) {
final SideListHeader sideListHeader = (SideListHeader) holder;
sideListHeader.sort.setOnEditorActionListener(new TextView.OnEditorActionListener() {
@Override
public boolean onEditorAction(TextView v, int actionId, KeyEvent event) {
return headerActionListener.onSortEditorAction(v, actionId, event); // delegate to the listener supplied in the constructor
}
});
sideListHeader.sort.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence charSequence, int i, int i2, int i3) {
}
@Override
public void onTextChanged(CharSequence charSequence, int i, int i2, int i3) {
}
@Override
public void afterTextChanged(Editable editable) {
String result = sideListHeader.sort.getText().toString().replaceAll(" ", "");
getFilter().filter(result);
}
});
}
if (holder instanceof SideListItem) {
// Inflate normal item //
}
}
// need to override this method
@Override
public int getItemViewType(int position) {
if (isPositionHeader(position)) {
return TYPE_HEADER;
}
return TYPE_ITEM;
}
private boolean isPositionHeader(int position) {
return position == 0;
}
// increase getItemCount by 1 to account for the header row
@Override
public int getItemCount() {
return filteredData.size() + 1;
}
private class TestFilter extends Filter {
@Override
protected FilterResults performFiltering(CharSequence constraint) {
FilterResults results = new FilterResults();
String prefix = constraint.toString().toLowerCase();
if (prefix.isEmpty()) {
ArrayList<String> list = new ArrayList<>(data);
results.values = list;
results.count = list.size();
} else {
final ArrayList<String> list = new ArrayList<>(data);
final ArrayList<String> nlist = new ArrayList<>();
for (int i = 0 ; i < list.size(); i++) {
String item = list.get(i);
if (item.toLowerCase().contains(prefix)) {
nlist.add(item);
}
}
results.values = nlist;
results.count = nlist.size();
}
return results;
}
@SuppressWarnings("unchecked")
@Override
protected void publishResults(CharSequence constraint, FilterResults results) {
notifyItemRangeRemoved(1, getItemCount() - 1);
filteredData.clear();
filteredData.addAll((List<String>) results.values);
notifyItemRangeInserted(1, filteredData.size()); // one range event covers all re-inserted item rows
}
}
}
I'm not sure how correct this way is, but in my code I implemented it like this:
private var headerList: List<HeaderItem> = listOf(HeaderItem("Title"))
private fun searchItem(items: List<Items>, query: String) {
items.filterIsInstance<MainItem>().filter { filteredItems ->
filteredItems.header.lowercase().contains(query.lowercase())
}.let { searchedItems ->
rvAdapter.submitList(headerList + searchedItems)
}
}
This way I was able to preserve the header element when I did my search.
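On the focus-loss problem specifically, one trick that may help (a sketch, not guaranteed for every layout) is to mark the header's ViewHolder as non-recyclable, so the RecyclerView keeps the same header view, and with it the EditText focus, alive while scrolling:

@Override
public void onBindViewHolder(RecyclerView.ViewHolder holder, final int position) {
    if (holder instanceof SideListHeader) {
        // keep this one view out of the recycling pool so its EditText retains focus
        holder.setIsRecyclable(false);
        // ... existing header binding from the question ...
    }
}

With a single header row, the cost of never recycling that view is negligible.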

Map-Reduce not reducing as much as expected with complex keys and values

No matter how simple I make the compareTo of my complex key, I don't get the expected results. The one exception: if I use a key that is the same for every record, it appropriately reduces to one record. I've also noticed that this happens only when I process the full load; if I break off a few of the records that didn't reduce and run them on a much smaller scale, those records do get combined.
The sum of the output records is correct, but there is duplication at the record level of items I would have expected to group together. So where I would expect, say, 500 records summing to 5,000, I end up with 1232 records summing to 5,000, with obvious records that should have been reduced into one.
I've read about the problems with object references and complex keys and values, but I don't see anywhere that I have potential for that left. To that end, you will find places where I'm creating new objects that I probably don't need to, but I'm trying everything at this point and will dial it back once it is working.
I'm out of ideas on what to try or where and how to poke to figure this out. Please help!
public static class Map extends
Mapper<LongWritable, Text, IMSTranOut, IMSTranSums> {
//private SimpleDateFormat dtFormat = new SimpleDateFormat("yyyyddd");
@Override
public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
String line = value.toString();
SimpleDateFormat dtFormat = new SimpleDateFormat("yyyyddd");
IMSTranOut dbKey = new IMSTranOut();
IMSTranSums sumVals = new IMSTranSums();
String[] tokens = line.split(",", -1);
dbKey.setLoadKey(-99);
dbKey.setTranClassKey(-99);
dbKey.setTransactionCode(tokens[0]);
dbKey.setTransactionType(tokens[1]);
dbKey.setNpaNxx(getNPA(dbKey.getTransactionCode()));
try {
dbKey.setTranDate(new Date(dtFormat.parse(tokens[2]).getTime()));
} catch (ParseException e) {
}// 2
dbKey.setTranHour(getTranHour(tokens[3]));
try {
dbKey.setStartDate(new Date(dtFormat.parse(tokens[4]).getTime()));
} catch (ParseException e) {
}// 4
dbKey.setStartHour(getTranHour(tokens[5]));
try {
dbKey.setStopDate(new Date(dtFormat.parse(tokens[6]).getTime()));
} catch (ParseException e) {
}// 6
dbKey.setStopHour(getTranHour(tokens[7]));
sumVals.setTranCount(1);
sumVals.setInputQTime(Double.parseDouble(tokens[8]));
sumVals.setElapsedTime(Double.parseDouble(tokens[9]));
sumVals.setCpuTime(Double.parseDouble(tokens[10]));
context.write(dbKey, sumVals);
}
}
public static class Reduce extends
Reducer<IMSTranOut, IMSTranSums, IMSTranOut, IMSTranSums> {
@Override
public void reduce(IMSTranOut key, Iterable<IMSTranSums> values,
Context context) throws IOException, InterruptedException {
int tranCount = 0;
double inputQ = 0;
double elapsed = 0;
double cpu = 0;
for (IMSTranSums val : values) {
tranCount += val.getTranCount();
inputQ += val.getInputQTime();
elapsed += val.getElapsedTime();
cpu += val.getCpuTime();
}
IMSTranSums sumVals=new IMSTranSums();
IMSTranOut dbKey=new IMSTranOut();
sumVals.setCpuTime(cpu);
sumVals.setElapsedTime(elapsed);
sumVals.setInputQTime(inputQ);
sumVals.setTranCount(tranCount);
dbKey.setLoadKey(key.getLoadKey());
dbKey.setTranClassKey(key.getTranClassKey());
dbKey.setNpaNxx(key.getNpaNxx());
dbKey.setTransactionCode(key.getTransactionCode());
dbKey.setTransactionType(key.getTransactionType());
dbKey.setTranDate(key.getTranDate());
dbKey.setTranHour(key.getTranHour());
dbKey.setStartDate(key.getStartDate());
dbKey.setStartHour(key.getStartHour());
dbKey.setStopDate(key.getStopDate());
dbKey.setStopHour(key.getStopHour());
dbKey.setInputQTime(inputQ);
dbKey.setElapsedTime(elapsed);
dbKey.setCpuTime(cpu);
dbKey.setTranCount(tranCount);
context.write(dbKey, sumVals);
}
}
Here is the implementation of the DBWritable class:
public class IMSTranOut implements DBWritable,
WritableComparable<IMSTranOut> {
private int loadKey;
private int tranClassKey;
private String npaNxx;
private String transactionCode;
private String transactionType;
private Date tranDate;
private double tranHour;
private Date startDate;
private double startHour;
private Date stopDate;
private double stopHour;
private double inputQTime;
private double elapsedTime;
private double cpuTime;
private int tranCount;
public void readFields(ResultSet rs) throws SQLException {
setLoadKey(rs.getInt("LOAD_KEY"));
setTranClassKey(rs.getInt("TRAN_CLASS_KEY"));
setNpaNxx(rs.getString("NPA_NXX"));
setTransactionCode(rs.getString("TRANSACTION_CODE"));
setTransactionType(rs.getString("TRANSACTION_TYPE"));
setTranDate(rs.getDate("TRAN_DATE"));
setTranHour(rs.getInt("TRAN_HOUR"));
setStartDate(rs.getDate("START_DATE"));
setStartHour(rs.getInt("START_HOUR"));
setStopDate(rs.getDate("STOP_DATE"));
setStopHour(rs.getInt("STOP_HOUR"));
setInputQTime(rs.getDouble("INPUT_Q_TIME"));
setElapsedTime(rs.getDouble("ELAPSED_TIME"));
setCpuTime(rs.getDouble("CPU_TIME"));
setTranCount(rs.getInt("TRAN_COUNT"));
}
public void write(PreparedStatement ps) throws SQLException {
ps.setInt(1, loadKey);
ps.setInt(2, tranClassKey);
ps.setString(3, npaNxx);
ps.setString(4, transactionCode);
ps.setString(5, transactionType);
ps.setDate(6, tranDate);
ps.setDouble(7, tranHour);
ps.setDate(8, startDate);
ps.setDouble(9, startHour);
ps.setDate(10, stopDate);
ps.setDouble(11, stopHour);
ps.setDouble(12, inputQTime);
ps.setDouble(13, elapsedTime);
ps.setDouble(14, cpuTime);
ps.setInt(15, tranCount);
}
public int getLoadKey() {
return loadKey;
}
public void setLoadKey(int loadKey) {
this.loadKey = loadKey;
}
public int getTranClassKey() {
return tranClassKey;
}
public void setTranClassKey(int tranClassKey) {
this.tranClassKey = tranClassKey;
}
public String getNpaNxx() {
return npaNxx;
}
public void setNpaNxx(String npaNxx) {
this.npaNxx = new String(npaNxx);
}
public String getTransactionCode() {
return transactionCode;
}
public void setTransactionCode(String transactionCode) {
this.transactionCode = new String(transactionCode);
}
public String getTransactionType() {
return transactionType;
}
public void setTransactionType(String transactionType) {
this.transactionType = new String(transactionType);
}
public Date getTranDate() {
return tranDate;
}
public void setTranDate(Date tranDate) {
this.tranDate = new Date(tranDate.getTime());
}
public double getTranHour() {
return tranHour;
}
public void setTranHour(double tranHour) {
this.tranHour = tranHour;
}
public Date getStartDate() {
return startDate;
}
public void setStartDate(Date startDate) {
this.startDate = new Date(startDate.getTime());
}
public double getStartHour() {
return startHour;
}
public void setStartHour(double startHour) {
this.startHour = startHour;
}
public Date getStopDate() {
return stopDate;
}
public void setStopDate(Date stopDate) {
this.stopDate = new Date(stopDate.getTime());
}
public double getStopHour() {
return stopHour;
}
public void setStopHour(double stopHour) {
this.stopHour = stopHour;
}
public double getInputQTime() {
return inputQTime;
}
public void setInputQTime(double inputQTime) {
this.inputQTime = inputQTime;
}
public double getElapsedTime() {
return elapsedTime;
}
public void setElapsedTime(double elapsedTime) {
this.elapsedTime = elapsedTime;
}
public double getCpuTime() {
return cpuTime;
}
public void setCpuTime(double cpuTime) {
this.cpuTime = cpuTime;
}
public int getTranCount() {
return tranCount;
}
public void setTranCount(int tranCount) {
this.tranCount = tranCount;
}
public void readFields(DataInput input) throws IOException {
setNpaNxx(input.readUTF());
setTransactionCode(input.readUTF());
setTransactionType(input.readUTF());
setTranDate(new Date(input.readLong()));
setStartDate(new Date(input.readLong()));
setStopDate(new Date(input.readLong()));
setLoadKey(input.readInt());
setTranClassKey(input.readInt());
setTranHour(input.readDouble());
setStartHour(input.readDouble());
setStopHour(input.readDouble());
setInputQTime(input.readDouble());
setElapsedTime(input.readDouble());
setCpuTime(input.readDouble());
setTranCount(input.readInt());
}
public void write(DataOutput output) throws IOException {
output.writeUTF(npaNxx);
output.writeUTF(transactionCode);
output.writeUTF(transactionType);
output.writeLong(tranDate.getTime());
output.writeLong(startDate.getTime());
output.writeLong(stopDate.getTime());
output.writeInt(loadKey);
output.writeInt(tranClassKey);
output.writeDouble(tranHour);
output.writeDouble(startHour);
output.writeDouble(stopHour);
output.writeDouble(inputQTime);
output.writeDouble(elapsedTime);
output.writeDouble(cpuTime);
output.writeInt(tranCount);
}
public int compareTo(IMSTranOut o) {
return (Integer.compare(loadKey, o.getLoadKey()) == 0
&& Integer.compare(tranClassKey, o.getTranClassKey()) == 0
&& npaNxx.compareTo(o.getNpaNxx()) == 0
&& transactionCode.compareTo(o.getTransactionCode()) == 0
&& (transactionType.compareTo(o.getTransactionType()) == 0)
&& tranDate.compareTo(o.getTranDate()) == 0
&& Double.compare(tranHour, o.getTranHour()) == 0
&& startDate.compareTo(o.getStartDate()) == 0
&& Double.compare(startHour, o.getStartHour()) == 0
&& stopDate.compareTo(o.getStopDate()) == 0
&& Double.compare(stopHour, o.getStopHour()) == 0) ? 0 : 1;
}
}
Implementation of the Writable class for the complex values:
public class IMSTranSums implements Writable {
private double inputQTime;
private double elapsedTime;
private double cpuTime;
private int tranCount;
public double getInputQTime() {
return inputQTime;
}
public void setInputQTime(double inputQTime) {
this.inputQTime = inputQTime;
}
public double getElapsedTime() {
return elapsedTime;
}
public void setElapsedTime(double elapsedTime) {
this.elapsedTime = elapsedTime;
}
public double getCpuTime() {
return cpuTime;
}
public void setCpuTime(double cpuTime) {
this.cpuTime = cpuTime;
}
public int getTranCount() {
return tranCount;
}
public void setTranCount(int tranCount) {
this.tranCount = tranCount;
}
public void write(DataOutput output) throws IOException {
output.writeDouble(inputQTime);
output.writeDouble(elapsedTime);
output.writeDouble(cpuTime);
output.writeInt(tranCount);
}
public void readFields(DataInput input) throws IOException {
inputQTime=input.readDouble();
elapsedTime=input.readDouble();
cpuTime=input.readDouble();
tranCount=input.readInt();
}
}
Your compareTo is flawed: it will completely break the sort, because it never returns -1. For two unequal keys, a.compareTo(b) and b.compareTo(a) both return 1, which violates the antisymmetry and transitivity the sorting contract requires.
I would recommend using a CompareToBuilder from Apache Commons or a ComparisonChain from Guava to make your comparisons much more readable (and correct!).
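A sketch of what that could look like for this key, using org.apache.commons.lang3.builder.CompareToBuilder (the field order is my guess, mirroring the checks in your current compareTo):

@Override
public int compareTo(IMSTranOut o) {
    // compares field by field and returns the first non-zero result,
    // yielding a proper -1/0/1 total ordering
    return new CompareToBuilder()
            .append(loadKey, o.getLoadKey())
            .append(tranClassKey, o.getTranClassKey())
            .append(npaNxx, o.getNpaNxx())
            .append(transactionCode, o.getTransactionCode())
            .append(transactionType, o.getTransactionType())
            .append(tranDate, o.getTranDate())
            .append(tranHour, o.getTranHour())
            .append(startDate, o.getStartDate())
            .append(startHour, o.getStartHour())
            .append(stopDate, o.getStopDate())
            .append(stopHour, o.getStopHour())
            .toComparison();
}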

Why different result each run?

I'm playing around with CompletableFuture and streams in Java 8 and I get different printouts each time I run this. Just curious, why?
public class DoIt {
public static class My {
private long cur = 0;
public long next() {
return cur++;
}
}
public static long add() {
long sum = 0;
for (long i=0; i<=100;i++) {
sum += i;
}
return sum;
}
public static long getResult(CompletableFuture<Long> f) {
long l = 0;
try {
f.complete(42l);
l = f.get();
System.out.println(l);
} catch (Exception e) {
//...
}
return l;
}
public static void main(String[] args){
ExecutorService exec = Executors.newFixedThreadPool(2);
My my = new My();
long sum = Stream.generate(my::next).limit(20000).
map(x -> CompletableFuture.supplyAsync(()-> add(), exec)).
mapToLong(f->getResult(f)).sum();
System.out.println(sum);
exec.shutdown();
}
}
If I skip the f.complete(42l) call I always get the same result.
http://download.java.net/jdk8/docs/api/java/util/concurrent/CompletableFuture.html#complete-T-
By the time the complete(42l) call happens, some of the add() calls may have already completed. complete() sets the value only if the future is not already done, so each future ends up holding either 42 (when complete() wins the race) or 5050, the sum 0 + 1 + ... + 100 (when add() finishes first). How many of the 20000 futures fall on each side differs between runs, and so does the printed total.
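A minimal sketch of the race for a single future (variable names are mine):

CompletableFuture<Long> f = CompletableFuture.supplyAsync(DoIt::add, exec);
boolean wonRace = f.complete(42L); // true only if add() had not finished yet
long value = f.join();             // 42 if wonRace, otherwise 5050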

Hadoop Raw comparator

I am trying to implement the following in a RawComparator, but I'm not sure how to write it.
The timestamp field here is of type LongWritable.
if (this.getNaturalKey().compareTo(o.getNaturalKey()) != 0) {
return this.getNaturalKey().compareTo(o.getNaturalKey());
} else if (this.timeStamp != o.timeStamp) {
return timeStamp.compareTo(o.timeStamp);
} else {
return 0;
}
I found a hint here, but I'm not sure how to implement it for a LongWritable type:
http://my.safaribooksonline.com/book/databases/hadoop/9780596521974/serialization/id3548156
Thanks for your help.
Let's say I have a CompositeKey that represents a pair (String stockSymbol, long timestamp).
We can do a primary grouping pass on the stockSymbol field to get all of the data of one type together, and then our "secondary sort" during the shuffle phase uses the timestamp member to sort the time-series points so that they arrive at the reducer partitioned and in sorted order.
public class CompositeKey implements WritableComparable<CompositeKey> {
// natural key is (stockSymbol)
// composite key is a pair (stockSymbol, timestamp)
private String stockSymbol;
private long timestamp;
// ... getters and setters omitted for clarity ...
@Override
public void readFields(DataInput in) throws IOException {
this.stockSymbol = in.readUTF();
this.timestamp = in.readLong();
}
@Override
public void write(DataOutput out) throws IOException {
out.writeUTF(this.stockSymbol);
out.writeLong(this.timestamp);
}
@Override
public int compareTo(CompositeKey other) {
if (this.stockSymbol.compareTo(other.stockSymbol) != 0) {
return this.stockSymbol.compareTo(other.stockSymbol);
}
else if (this.timestamp != other.timestamp) {
return timestamp < other.timestamp ? -1 : 1;
}
else {
return 0;
}
}
}
Now the CompositeKey comparator would be:
public class CompositeKeyComparator extends WritableComparator {
protected CompositeKeyComparator() {
super(CompositeKey.class, true);
}
@Override
public int compare(WritableComparable wc1, WritableComparable wc2) {
CompositeKey ck1 = (CompositeKey) wc1;
CompositeKey ck2 = (CompositeKey) wc2;
int comparison = ck1.getStockSymbol().compareTo(ck2.getStockSymbol());
if (comparison == 0) {
// stock symbols are equal here
if (ck1.getTimestamp() == ck2.getTimestamp()) {
return 0;
}
else if (ck1.getTimestamp() < ck2.getTimestamp()) {
return -1;
}
else {
return 1;
}
}
else {
return comparison;
}
}
}
Are you asking about a way to compare the LongWritable type provided by Hadoop?
If yes, then the answer is to use the compare() method. For more details, scroll down here.
The best way to correctly implement RawComparator is to extend WritableComparator and override its compare() method. WritableComparator is very well written, so you can easily understand it.
It is already implemented from what I see in the LongWritable class:
/** A Comparator optimized for LongWritable. */
public static class Comparator extends WritableComparator {
public Comparator() {
super(LongWritable.class);
}
public int compare(byte[] b1, int s1, int l1,
byte[] b2, int s2, int l2) {
long thisValue = readLong(b1, s1);
long thatValue = readLong(b2, s2);
return (thisValue<thatValue ? -1 : (thisValue==thatValue ? 0 : 1));
}
}
That byte-level compare() is the RawComparator override.
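Putting the two together, a sketch of a byte-level comparator for the CompositeKey above. This assumes the exact write() layout shown earlier: writeUTF's unsigned two-byte length prefix, the string bytes, then the long:

import org.apache.hadoop.io.WritableComparator;

public class CompositeKeyRawComparator extends WritableComparator {

    public CompositeKeyRawComparator() {
        super(CompositeKey.class);
    }

    @Override
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        // writeUTF stores an unsigned 2-byte length followed by the string bytes
        int strLen1 = readUnsignedShort(b1, s1);
        int strLen2 = readUnsignedShort(b2, s2);
        int cmp = compareBytes(b1, s1 + 2, strLen1, b2, s2 + 2, strLen2);
        if (cmp != 0) {
            return cmp;
        }
        // the timestamp long starts right after the string bytes
        long ts1 = readLong(b1, s1 + 2 + strLen1);
        long ts2 = readLong(b2, s2 + 2 + strLen2);
        return ts1 < ts2 ? -1 : (ts1 == ts2 ? 0 : 1);
    }
}

Registering it with job.setSortComparatorClass(CompositeKeyRawComparator.class) lets the sort run on raw bytes without deserializing keys.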

Using a custom Object as key emitted by mapper

I have a situation in which the mapper emits as key an object of custom type.
It has two fields: an IntWritable ID and an IntArrayWritable data array.
The implementation is as follows.
import java.io.*;
import org.apache.hadoop.io.*;
public class PairDocIdPerm implements WritableComparable<PairDocIdPerm> {
public PairDocIdPerm(){
this.permId = new IntWritable(-1);
this.SignaturePerm = new IntArrayWritable();
}
public IntWritable getPermId() {
return permId;
}
public void setPermId(IntWritable permId) {
this.permId = permId;
}
public IntArrayWritable getSignaturePerm() {
return SignaturePerm;
}
public void setSignaturePerm(IntArrayWritable signaturePerm) {
SignaturePerm = signaturePerm;
}
private IntWritable permId;
private IntArrayWritable SignaturePerm;
public PairDocIdPerm(IntWritable permId,IntArrayWritable SignaturePerm) {
this.permId = permId;
this.SignaturePerm = SignaturePerm;
}
@Override
public void write(DataOutput out) throws IOException {
permId.write(out);
SignaturePerm.write(out);
}
@Override
public void readFields(DataInput in) throws IOException {
permId.readFields(in);
SignaturePerm.readFields(in);
}
@Override
public int hashCode() { // keys with the same permId must go to the same reducer, therefore hash on permId only
return permId.get();//.hashCode();
}
@Override
public boolean equals(Object o) {
if (o instanceof PairDocIdPerm) {
PairDocIdPerm tp = (PairDocIdPerm) o;
return permId.equals(tp.permId) && SignaturePerm.equals(tp.SignaturePerm);
}
return false;
}
@Override
public String toString() {
return permId + "\t" +SignaturePerm.toString();
}
@Override
public int compareTo(PairDocIdPerm tp) {
int cmp = permId.compareTo(tp.permId);
Writable[] ar, other;
ar = this.SignaturePerm.get();
other = tp.SignaturePerm.get();
if (cmp == 0) {
for(int i=0;i<ar.length;i++){
if(((IntWritable)ar[i]).get() == ((IntWritable)other[i]).get()){cmp= 0;continue;}
else if(((IntWritable)ar[i]).get() < ((IntWritable)other[i]).get()){ return -1;}
else if(((IntWritable)ar[i]).get() > ((IntWritable)other[i]).get()){return 1;}
}
}
return cmp;
//return 1;
}
}
I require keys with the same ID to go to the same reducer, with their sort order as coded in the compareTo method.
However, when I use this, my job execution status is always map 100%, reduce 0%.
The reduce never runs to completion. Is there anything wrong in this implementation?
In general, what is the likely problem if the reducer status is always 0%?
I think this might be a null pointer exception in the read method:
@Override
public void readFields(DataInput in) throws IOException {
permId.readFields(in);
SignaturePerm.readFields(in);
}
permId is null in this case.
So what you have to do is this:
IntWritable permId = new IntWritable();
Either in the field initializer or before the read.
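As a sketch, that means declaring both fields with initializers, so readFields() always has objects to deserialize into:

private IntWritable permId = new IntWritable();
private IntArrayWritable SignaturePerm = new IntArrayWritable();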
However, your code is horrible to read.
