I am reading values from an .xlsx file using Spring Batch Excel and POI. I see that numeric values are printed in a different format than the original values in the .xlsx file.
Please suggest how to print the values exactly as they appear in the .xlsx file. Details are below.
In my Excel file the values are as follows
The values are printed as below
My code is as follows
public ItemReader<DataObject> fileItemReader(InputStream inputStream){
PoiItemReader<DataObject> reader = new PoiItemReader<DataObject>();
reader.setLinesToSkip(1);
reader.setResource(new InputStreamResource(inputStream));
reader.setRowMapper(excelRowMapper());
reader.open(new ExecutionContext());
return reader;
}
private RowMapper<DataObject> excelRowMapper() {
return new MyRowMapper();
}
public class MyRowMapper implements RowMapper<DataObject> {
@Override
public DataObject mapRow(RowSet rowSet) throws Exception {
DataObject dataObj = new DataObject();
dataObj.setFieldOne(rowSet.getColumnValue(0));
dataObj.setFieldTwo(rowSet.getColumnValue(1));
dataObj.setFieldThree(rowSet.getColumnValue(2));
dataObj.setFieldFour(rowSet.getColumnValue(3));
return dataObj;
}
}
I had this same problem, and its root cause is the class org.springframework.batch.item.excel.poi.PoiSheet used inside PoiItemReader.
The problem happens in the method public String[] getRow(final int rowNumber), which takes an org.apache.poi.ss.usermodel.Row object and converts it to an array of Strings after detecting the type of each column in the row. This method contains the code:
switch (cellType) {
case NUMERIC:
if (DateUtil.isCellDateFormatted(cell)) {
Date date = cell.getDateCellValue();
cells.add(String.valueOf(date.getTime()));
} else {
cells.add(String.valueOf(cell.getNumericCellValue()));
}
break;
case BOOLEAN:
cells.add(String.valueOf(cell.getBooleanCellValue()));
break;
case STRING:
case BLANK:
cells.add(cell.getStringCellValue());
break;
case ERROR:
cells.add(FormulaError.forInt(cell.getErrorCellValue()).getString());
break;
default:
throw new IllegalArgumentException("Cannot handle cells of type '" + cell.getCellTypeEnum() + "'");
}
Here the treatment for a cell identified as NUMERIC is cells.add(String.valueOf(cell.getNumericCellValue())). In this line, the cell value is read as a double (cell.getNumericCellValue()) and that double is converted to a String (String.valueOf()). The problem lies in the String.valueOf() method, which generates scientific notation if the number is too big (>= 10000000) or too small (< 0.001), and appends ".0" to integer values.
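To see the distortion in isolation, here is a small standalone demonstration (the numbers are made up for illustration, not taken from the question):
public class ValueOfDemo {
    public static void main(String[] args) {
        // Double.toString (which String.valueOf delegates to) switches to scientific
        // notation for magnitudes >= 1e7 or < 1e-3, and keeps a trailing ".0" on whole numbers
        System.out.println(String.valueOf(20000.0));     // "20000.0"
        System.out.println(String.valueOf(123456789.0)); // "1.23456789E8"
        System.out.println(String.valueOf(0.0005));      // "5.0E-4"
    }
}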
As an alternative to the line cells.add(String.valueOf(cell.getNumericCellValue())), you could use
DataFormatter formatter = new DataFormatter();
cells.add(formatter.formatCellValue(cell));
which will return the exact values of the cells as Strings. However, this also means that your decimal numbers will be locale dependent (you'll receive the string "2.5" from a document saved with Excel configured for the UK or India, and the string "2,5" from France or Brazil).
To avoid this dependency, we can use the solution presented on https://stackoverflow.com/a/25307973/9184574:
DecimalFormat df = new DecimalFormat("0", DecimalFormatSymbols.getInstance(Locale.ENGLISH));
df.setMaximumFractionDigits(340);
cells.add(df.format(cell.getNumericCellValue()));
This will convert the cell to a double and then format it to the English pattern, without scientific notation and without adding ".0" to integers.
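As a quick sanity check, here are the same made-up values from above run through that DecimalFormat (a sketch, not library code):
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class DecimalFormatDemo {
    public static void main(String[] args) {
        DecimalFormat df = new DecimalFormat("0", DecimalFormatSymbols.getInstance(Locale.ENGLISH));
        df.setMaximumFractionDigits(340); // large enough to never round away a double's decimals
        System.out.println(df.format(20000.0));     // "20000"     (no trailing ".0")
        System.out.println(df.format(123456789.0)); // "123456789" (no scientific notation)
        System.out.println(df.format(2.5));         // "2.5"       (English decimal separator)
    }
}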
My implementation of CustomPoiSheet (a small adaptation of the original PoiSheet) was:
class CustomPoiSheet implements Sheet {
protected final org.apache.poi.ss.usermodel.Sheet delegate;
private final int numberOfRows;
private final String name;
private FormulaEvaluator evaluator;
/**
* Constructor which takes the delegate sheet.
*
* @param delegate the apache POI sheet
*/
CustomPoiSheet(final org.apache.poi.ss.usermodel.Sheet delegate) {
super();
this.delegate = delegate;
this.numberOfRows = this.delegate.getLastRowNum() + 1;
this.name=this.delegate.getSheetName();
}
/**
* {@inheritDoc}
*/
@Override
public int getNumberOfRows() {
return this.numberOfRows;
}
/**
* {@inheritDoc}
*/
@Override
public String getName() {
return this.name;
}
/**
* {@inheritDoc}
*/
@Override
public String[] getRow(final int rowNumber) {
final Row row = this.delegate.getRow(rowNumber);
if (row == null) {
return null;
}
final List<String> cells = new LinkedList<>();
final int numberOfColumns = row.getLastCellNum();
for (int i = 0; i < numberOfColumns; i++) {
Cell cell = row.getCell(i);
CellType cellType = cell.getCellType();
if (cellType == CellType.FORMULA) {
FormulaEvaluator evaluator = getFormulaEvaluator();
if (evaluator == null) {
cells.add(cell.getCellFormula());
} else {
cellType = evaluator.evaluateFormulaCell(cell);
}
}
switch (cellType) {
case NUMERIC:
if (DateUtil.isCellDateFormatted(cell)) {
Date date = cell.getDateCellValue();
cells.add(String.valueOf(date.getTime()));
} else {
// Returns a numeric value as close as possible to its stored value and displayed string, formatting only to the English locale.
// It will produce an integer string (without decimal places) if the value is an integer, and
// a double string without trailing zeros otherwise. It also suppresses scientific notation.
// Based on https://stackoverflow.com/a/25307973/9184574
DecimalFormat df = new DecimalFormat("0", DecimalFormatSymbols.getInstance(Locale.ENGLISH));
df.setMaximumFractionDigits(340);
cells.add(df.format(cell.getNumericCellValue()));
//DataFormatter formatter = new DataFormatter();
//cells.add(formatter.formatCellValue(cell));
//cells.add(String.valueOf(cell.getNumericCellValue()));
}
break;
case BOOLEAN:
cells.add(String.valueOf(cell.getBooleanCellValue()));
break;
case STRING:
case BLANK:
cells.add(cell.getStringCellValue());
break;
case ERROR:
cells.add(FormulaError.forInt(cell.getErrorCellValue()).getString());
break;
default:
throw new IllegalArgumentException("Cannot handle cells of type '" + cell.getCellTypeEnum() + "'");
}
}
return cells.toArray(new String[0]);
}
private FormulaEvaluator getFormulaEvaluator() {
if (this.evaluator == null) {
this.evaluator = delegate.getWorkbook().getCreationHelper().createFormulaEvaluator();
}
return this.evaluator;
}
}
And my implementation of CustomPoiItemReader (a small adaptation of the original PoiItemReader) calling CustomPoiSheet:
public class CustomPoiItemReader<T> extends AbstractExcelItemReader<T> {
private Workbook workbook;
@Override
protected Sheet getSheet(final int sheet) {
return new CustomPoiSheet(this.workbook.getSheetAt(sheet));
}
public CustomPoiItemReader(){
super();
}
@Override
protected int getNumberOfSheets() {
return this.workbook.getNumberOfSheets();
}
@Override
protected void doClose() throws Exception {
super.doClose();
if (this.workbook != null) {
this.workbook.close();
}
this.workbook=null;
}
/**
* Open the underlying file using the {@code WorkbookFactory}. We keep track of the used {@code InputStream} so that
* it can be closed cleanly on the end of reading the file. This to be able to release the resources used by
* Apache POI.
*
* @param inputStream the {@code InputStream} pointing to the Excel file.
* @throws Exception is thrown for any errors.
*/
@Override
protected void openExcelFile(final InputStream inputStream) throws Exception {
this.workbook = WorkbookFactory.create(inputStream);
this.workbook.setMissingCellPolicy(Row.MissingCellPolicy.CREATE_NULL_AS_BLANK);
}
}
Just change your code like this while reading data from Excel:
dataObj.setField(Float.valueOf(rowSet.getColumnValue(idx)).intValue());
This only works for columns A, B, C.
Related
I am trying to implement a Hive UDF with parameters, so I am extending the GenericUDF class.
The problem is that my UDF works fine on the String data type, but it throws an error if I run it on other data types. I want the UDF to run regardless of the data type.
Would someone please let me know what's wrong with the following code?
@Description(name = "Encrypt", value = "Encrypt the Given Column", extended = "SELECT Encrypt('Hello World!', 'Key');")
public class Encrypt extends GenericUDF {
StringObjectInspector key;
StringObjectInspector col;
@Override
public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {
if (arguments.length != 2) {
throw new UDFArgumentLengthException("Encrypt only takes 2 arguments: T, String");
}
ObjectInspector keyObject = arguments[1];
ObjectInspector colObject = arguments[0];
if (!(keyObject instanceof StringObjectInspector)) {
throw new UDFArgumentException("Error: Key Type is Not String");
}
this.key = (StringObjectInspector) keyObject;
this.col = (StringObjectInspector) colObject;
return PrimitiveObjectInspectorFactory.javaStringObjectInspector;
}
@Override
public Object evaluate(DeferredObject[] deferredObjects) throws HiveException {
String keyString = key.getPrimitiveJavaObject(deferredObjects[1].get());
String colString = col.getPrimitiveJavaObject(deferredObjects[0].get());
return AES.encrypt(colString, keyString);
}
@Override
public String getDisplayString(String[] strings) {
return null;
}
}
Error
java.lang.ClassCastException: org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaIntObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector
I would suggest you replace StringObjectInspector col with PrimitiveObjectInspector col, and the corresponding cast: this.col = (PrimitiveObjectInspector) colObject. Then there are two ways.
The first is to process every possible primitive type, like this:
switch (((PrimitiveTypeInfo) colObject.getTypeInfo()).getPrimitiveCategory()) {
case BYTE:
case SHORT:
case INT:
case LONG:
case TIMESTAMP:
// cast_long_type
break;
case FLOAT:
case DOUBLE:
// cast_double_type
break;
case STRING:
// everything_is_fine
break;
case DECIMAL:
case BOOLEAN:
throw new UDFArgumentTypeException(0, "Unsupported yet");
default:
throw new UDFArgumentTypeException(0, "Unsupported type");
}
Another way is to use the PrimitiveObjectInspectorUtils.getString method:
Object colObject = deferredObjects[0].get();
String colString = PrimitiveObjectInspectorUtils.getString(colObject, col);
These are just pseudocode-like examples. Hope it helps.
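Putting the suggestion together, a minimal sketch of the changed initialize() and evaluate() (assuming the AES.encrypt helper and the annotations from the question stay as they are):
private StringObjectInspector key;
private PrimitiveObjectInspector col; // was StringObjectInspector

@Override
public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {
    if (arguments.length != 2) {
        throw new UDFArgumentLengthException("Encrypt only takes 2 arguments: T, String");
    }
    if (!(arguments[1] instanceof StringObjectInspector)) {
        throw new UDFArgumentException("Error: Key Type is Not String");
    }
    if (!(arguments[0] instanceof PrimitiveObjectInspector)) {
        throw new UDFArgumentException("Error: Column must be a primitive type");
    }
    this.key = (StringObjectInspector) arguments[1];
    this.col = (PrimitiveObjectInspector) arguments[0];
    return PrimitiveObjectInspectorFactory.javaStringObjectInspector;
}

@Override
public Object evaluate(DeferredObject[] deferredObjects) throws HiveException {
    String keyString = key.getPrimitiveJavaObject(deferredObjects[1].get());
    // getString renders any primitive value (int, long, double, ...) as a String,
    // using the column's own inspector to interpret the raw object
    String colString = PrimitiveObjectInspectorUtils.getString(deferredObjects[0].get(), col);
    return AES.encrypt(colString, keyString);
}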
I'm trying to work with the reddit JSON API. There are post data objects that contain a field called edited which may contain a boolean false if the post hasn't been edited, or a timestamp int if the post was edited.
Sometimes a boolean:
{
"edited": false,
"title": "Title 1"
}
Sometimes an int:
{
"edited": 1234567890,
"title": "Title 2"
}
When trying to parse the JSON where the POJO has the field set to int, I get an error: JsonDataException: Expected an int but was BOOLEAN...
How can I deal with this using Moshi?
I also ran into a similar problem where I had fields that were sometimes booleans and sometimes ints. I wanted them to always be ints. Here's how I solved it with Moshi and Kotlin:
Make a new annotation that you will use on fields that should be converted from boolean to int:
@JsonQualifier
#Retention(AnnotationRetention.RUNTIME)
#Target(AnnotationTarget.FIELD, AnnotationTarget.VALUE_PARAMETER, AnnotationTarget.FUNCTION)
annotation class ForceToInt
internal class ForceToIntJsonAdapter {
@ToJson
fun toJson(@ForceToInt i: Int): Int {
return i
}
@FromJson
@ForceToInt
fun fromJson(reader: JsonReader): Int {
return when (reader.peek()) {
JsonReader.Token.NUMBER -> reader.nextInt()
JsonReader.Token.BOOLEAN -> if (reader.nextBoolean()) 1 else 0
else -> {
reader.skipValue() // or throw
0
}
}
}
}
Use this annotation on the fields that you want to force to int:
@JsonClass(generateAdapter = true)
data class Discovery(
@Json(name = "id") val id: String = "",
@ForceToInt @Json(name = "thanked") val thanked: Int = 0
)
The easy way might be to make your Java edited field an Object type.
The better way for performance, error catching, and application usage is to use a custom JsonAdapter.
Example (edit as needed):
public final class Foo {
public final boolean edited;
public final int editedNumber;
public final String title;
public static final Object JSON_ADAPTER = new Object() {
final JsonReader.Options options = JsonReader.Options.of("edited", "title");
@FromJson Foo fromJson(JsonReader reader) throws IOException {
reader.beginObject();
boolean edited = true;
int editedNumber = -1;
String title = "";
while (reader.hasNext()) {
switch (reader.selectName(options)) {
case 0:
if (reader.peek() == JsonReader.Token.BOOLEAN) {
edited = reader.nextBoolean();
} else {
editedNumber = reader.nextInt();
}
break;
case 1:
title = reader.nextString();
break;
case -1:
reader.nextName();
reader.skipValue();
break;
default:
throw new AssertionError();
}
}
reader.endObject();
return new Foo(edited, editedNumber, title);
}
@ToJson void toJson(JsonWriter writer, Foo value) throws IOException {
writer.beginObject();
writer.name("edited");
if (value.edited) {
writer.value(value.editedNumber);
} else {
writer.value(false);
}
writer.name("title");
writer.value(value.title);
writer.endObject();
}
};
Foo(boolean edited, int editedNumber, String title) {
this.edited = edited;
this.editedNumber = editedNumber;
this.title = title;
}
}
Don't forget to register the adapter on your Moshi instance.
Moshi moshi = new Moshi.Builder().add(Foo.JSON_ADAPTER).build();
JsonAdapter<Foo> fooAdapter = moshi.adapter(Foo.class);
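A hypothetical usage, parsing the two shapes of edited from the question (fromJson declares IOException, so call it from code that handles or rethrows it):
Foo wasEdited = fooAdapter.fromJson("{\"edited\": 1234567890, \"title\": \"Title 2\"}");
Foo notEdited = fooAdapter.fromJson("{\"edited\": false, \"title\": \"Title 1\"}");
System.out.println(wasEdited.edited + " " + wasEdited.editedNumber); // true 1234567890
System.out.println(notEdited.edited);                                // false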
Is it possible to change the record delimiter from newline to some other string, so as to read a file with newlines into a single tuple in Pig?
Yes.
A = LOAD '...' USING PigStorage(',') AS (...); -- comma is the delimiter for fields
SET textinputformat.record.delimiter '<delimiter>'; -- record delimiter; by default it is `\n`. You can change it to any delimiter.
As mentioned here
You can use PigStorage
A = LOAD '/some/path/COMMA-DELIM-PREFIX*' USING PigStorage(',') AS (f1:chararray, ...);
B = LOAD '/some/path/SEMICOLON-DELIM-PREFIX*' USING PigStorage('\t') AS (f1:chararray, ...);
You can even try writing a load/store UDF.
There are Java code examples below for both load and store.
Load functions: the LoadFunc abstract class has the main methods for loading data, and for most use cases it would suffice to extend it. You can read more here.
Example
The loader implementation in the example is a loader for text data
with line delimiter as '\n' and '\t' as default field delimiter (which
can be overridden by passing a different field delimiter in the
constructor) - this is similar to current PigStorage loader in Pig.
The implementation uses an existing Hadoop supported Inputformat -
TextInputFormat - as the underlying InputFormat.
public class SimpleTextLoader extends LoadFunc {
protected RecordReader in = null;
private byte fieldDel = '\t';
private ArrayList<Object> mProtoTuple = null;
private TupleFactory mTupleFactory = TupleFactory.getInstance();
private static final int BUFFER_SIZE = 1024;
public SimpleTextLoader() {
}
/**
* Constructs a Pig loader that uses specified character as a field delimiter.
*
* @param delimiter
* the single byte character that is used to separate fields.
* ("\t" is the default.)
*/
public SimpleTextLoader(String delimiter) {
this();
if (delimiter.length() == 1) {
this.fieldDel = (byte)delimiter.charAt(0);
} else if (delimiter.length() > 1 && delimiter.charAt(0) == '\\') {
switch (delimiter.charAt(1)) {
case 't':
this.fieldDel = (byte)'\t';
break;
case 'x':
fieldDel =
Integer.valueOf(delimiter.substring(2), 16).byteValue();
break;
case 'u':
this.fieldDel =
Integer.valueOf(delimiter.substring(2)).byteValue();
break;
default:
throw new RuntimeException("Unknown delimiter " + delimiter);
}
} else {
throw new RuntimeException("PigStorage delimeter must be a single character");
}
}
@Override
public Tuple getNext() throws IOException {
try {
boolean notDone = in.nextKeyValue();
if (!notDone) {
return null;
}
Text value = (Text) in.getCurrentValue();
byte[] buf = value.getBytes();
int len = value.getLength();
int start = 0;
for (int i = 0; i < len; i++) {
if (buf[i] == fieldDel) {
readField(buf, start, i);
start = i + 1;
}
}
// pick up the last field
readField(buf, start, len);
Tuple t = mTupleFactory.newTupleNoCopy(mProtoTuple);
mProtoTuple = null;
return t;
} catch (InterruptedException e) {
int errCode = 6018;
String errMsg = "Error while reading input";
throw new ExecException(errMsg, errCode,
PigException.REMOTE_ENVIRONMENT, e);
}
}
private void readField(byte[] buf, int start, int end) {
if (mProtoTuple == null) {
mProtoTuple = new ArrayList<Object>();
}
if (start == end) {
// NULL value
mProtoTuple.add(null);
} else {
mProtoTuple.add(new DataByteArray(buf, start, end));
}
}
@Override
public InputFormat getInputFormat() {
return new TextInputFormat();
}
@Override
public void prepareToRead(RecordReader reader, PigSplit split) {
in = reader;
}
@Override
public void setLocation(String location, Job job)
throws IOException {
FileInputFormat.setInputPaths(job, location);
}
}
Store functions: the StoreFunc abstract class has the main methods for storing data, and for most use cases it should suffice to extend it.
Example
The storer implementation in the example is a storer for text data
with line delimiter as '\n' and '\t' as default field delimiter (which
can be overridden by passing a different field delimiter in the
constructor) - this is similar to current PigStorage storer in Pig.
The implementation uses an existing Hadoop supported OutputFormat -
TextOutputFormat as the underlying OutputFormat.
public class SimpleTextStorer extends StoreFunc {
protected RecordWriter writer = null;
private byte fieldDel = '\t';
private static final int BUFFER_SIZE = 1024;
private static final String UTF8 = "UTF-8";
public SimpleTextStorer() {
}
public SimpleTextStorer(String delimiter) {
this();
if (delimiter.length() == 1) {
this.fieldDel = (byte)delimiter.charAt(0);
} else if (delimiter.length() > 1 && delimiter.charAt(0) == '\\') {
switch (delimiter.charAt(1)) {
case 't':
this.fieldDel = (byte)'\t';
break;
case 'x':
fieldDel =
Integer.valueOf(delimiter.substring(2), 16).byteValue();
break;
case 'u':
this.fieldDel =
Integer.valueOf(delimiter.substring(2)).byteValue();
break;
default:
throw new RuntimeException("Unknown delimiter " + delimiter);
}
} else {
throw new RuntimeException("PigStorage delimeter must be a single character");
}
}
ByteArrayOutputStream mOut = new ByteArrayOutputStream(BUFFER_SIZE);
@Override
public void putNext(Tuple f) throws IOException {
int sz = f.size();
for (int i = 0; i < sz; i++) {
Object field;
try {
field = f.get(i);
} catch (ExecException ee) {
throw ee;
}
putField(field);
if (i != sz - 1) {
mOut.write(fieldDel);
}
}
Text text = new Text(mOut.toByteArray());
try {
writer.write(null, text);
mOut.reset();
} catch (InterruptedException e) {
throw new IOException(e);
}
}
@SuppressWarnings("unchecked")
private void putField(Object field) throws IOException {
//string constants for each delimiter
String tupleBeginDelim = "(";
String tupleEndDelim = ")";
String bagBeginDelim = "{";
String bagEndDelim = "}";
String mapBeginDelim = "[";
String mapEndDelim = "]";
String fieldDelim = ",";
String mapKeyValueDelim = "#";
switch (DataType.findType(field)) {
case DataType.NULL:
break; // just leave it empty
case DataType.BOOLEAN:
mOut.write(((Boolean)field).toString().getBytes());
break;
case DataType.INTEGER:
mOut.write(((Integer)field).toString().getBytes());
break;
case DataType.LONG:
mOut.write(((Long)field).toString().getBytes());
break;
case DataType.FLOAT:
mOut.write(((Float)field).toString().getBytes());
break;
case DataType.DOUBLE:
mOut.write(((Double)field).toString().getBytes());
break;
case DataType.BYTEARRAY: {
byte[] b = ((DataByteArray)field).get();
mOut.write(b, 0, b.length);
break;
}
case DataType.CHARARRAY:
// oddly enough, writeBytes writes a string
mOut.write(((String)field).getBytes(UTF8));
break;
case DataType.MAP:
boolean mapHasNext = false;
Map<String, Object> m = (Map<String, Object>)field;
mOut.write(mapBeginDelim.getBytes(UTF8));
for(Map.Entry<String, Object> e: m.entrySet()) {
if(mapHasNext) {
mOut.write(fieldDelim.getBytes(UTF8));
} else {
mapHasNext = true;
}
putField(e.getKey());
mOut.write(mapKeyValueDelim.getBytes(UTF8));
putField(e.getValue());
}
mOut.write(mapEndDelim.getBytes(UTF8));
break;
case DataType.TUPLE:
boolean tupleHasNext = false;
Tuple t = (Tuple)field;
mOut.write(tupleBeginDelim.getBytes(UTF8));
for(int i = 0; i < t.size(); ++i) {
if(tupleHasNext) {
mOut.write(fieldDelim.getBytes(UTF8));
} else {
tupleHasNext = true;
}
try {
putField(t.get(i));
} catch (ExecException ee) {
throw ee;
}
}
mOut.write(tupleEndDelim.getBytes(UTF8));
break;
case DataType.BAG:
boolean bagHasNext = false;
mOut.write(bagBeginDelim.getBytes(UTF8));
Iterator<Tuple> tupleIter = ((DataBag)field).iterator();
while(tupleIter.hasNext()) {
if(bagHasNext) {
mOut.write(fieldDelim.getBytes(UTF8));
} else {
bagHasNext = true;
}
putField((Object)tupleIter.next());
}
mOut.write(bagEndDelim.getBytes(UTF8));
break;
default: {
int errCode = 2108;
String msg = "Could not determine data type of field: " + field;
throw new ExecException(msg, errCode, PigException.BUG);
}
}
}
@Override
public OutputFormat getOutputFormat() {
return new TextOutputFormat<WritableComparable, Text>();
}
@Override
public void prepareToWrite(RecordWriter writer) {
this.writer = writer;
}
@Override
public void setStoreLocation(String location, Job job) throws IOException {
job.getConfiguration().set("mapred.textoutputformat.separator", "");
FileOutputFormat.setOutputPath(job, new Path(location));
if (location.endsWith(".bz2")) {
FileOutputFormat.setCompressOutput(job, true);
FileOutputFormat.setOutputCompressorClass(job, BZip2Codec.class);
} else if (location.endsWith(".gz")) {
FileOutputFormat.setCompressOutput(job, true);
FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
}
}
}
I work on a Java/HTML project. I've set a HashMap as a session attribute. I request the HashMap from the session and put a key/value pair in it:
map.put("some string", "1")
When this action is performed a second time, I print the HashMap content, and the only value, which was '1' after the last operation on the HashMap, becomes '-1'.
A HashMap is, in my opinion, the best data structure for this project. Can anyone help?
public class Cart {
private HashMap<String, asd> list;
public Cart(){
list = new HashMap<String, asd>();
}
public HashMap<String, asd> getMap(){
return list;
}
/*
* Parameter: code
* -1: increase quantity by 1
* 0: delete product from the product list
* else: set the product quantity to the passed value
*/
public void alterProduct(int code, String product){
if(list.containsKey(product)) {
if(code == -1) plusOne(product);
if(code == 0) remove(product);
else setValue(product, code);
}else {
asd asd = new asd();
asd.a = 1;
list.put(product, asd);
}
}
private void plusOne(String product){
asd asd = list.get(product);
asd.a = asd.a + 1;
list.put(product, asd);
}
private void remove(String product){
list.remove(product);
}
private void setValue(String product, int code){
asd asd = new asd();
asd.a = code;
list.put(product, asd);
}
}
class asd{
int a;
public String toString(){
return ""+a;
}
}
JSP code where I set Cart object as session attribute:
<%
Cart myCart = new Cart();
session.setAttribute("cart",myCart);
%>
Servlet code:
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
// TODO Auto-generated method stub
Cart cart = (Cart) request.getSession().getAttribute("cart");
cart.alterProduct(-1, (String) request.getSession().getAttribute("name"));
doGet(request, response);
}
After I call alterProduct a second time for the same (String) request.getSession().getAttribute("name"), the HashMap value for that key is '-1'.
What is the type/value of product? How is it connected to the "cart"?
I guess what happens is that you mix up data types. Another option is that you have a bug in the Cart.toString() method. I suggest you change the code to use a "plain" data type and recheck it. If it still fails, use your Cart class without the messy conversion and debug the code.
You have a bug here:
public void alterProduct(int code, String product){
if(list.containsKey(product)) {
if(code == -1) plusOne(product);
if(code == 0) remove(product);
else setValue(product, code);
}
private void setValue(String product, int code){
asd asd = new asd();
asd.a = code;
list.put(product, asd);
}
Consider what happens when you call cart.alterProduct(-1, "something") a second time.
list.containsKey(product) is true (you use the same product), and code is -1. So
if(code == -1) plusOne(product); is executed as expected.
But then you have something weird:
if(code == 0) remove(product);
else setValue(product, code);
code == 0 evaluates to false, so the else branch is executed. You are calling setValue(product, -1).
As you can see above, setValue will assign -1 to asd.a, which is what you are observing.
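A possible fix, sketched from the same method, is to chain the conditions so that exactly one branch runs:
public void alterProduct(int code, String product){
    if (list.containsKey(product)) {
        if (code == -1) {
            plusOne(product);        // increase quantity by 1
        } else if (code == 0) {
            remove(product);         // delete product from the product list
        } else {
            setValue(product, code); // set the product quantity to the passed value
        }
    } else {
        asd asd = new asd();
        asd.a = 1;
        list.put(product, asd);
    }
}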
I've got an issue with SortableDataProvider and DataTable in Wicket.
I've defined my DataTable as such:
IColumn<Column>[] columns = new IColumn[9];
//column values are mapped to the private attributes listed in ColumnImpl.java
columns[0] = new PropertyColumn<Column>(new Model<String>("#"), "columnPosition", "columnPosition");
columns[1] = new PropertyColumn<Column>(new Model<String>("Description"), "description");
columns[2] = new PropertyColumn<Column>(new Model<String>("Type"), "dataType", "dataType");
Adding it to the table:
DataTable<Column> dataTable = new DataTable<Column>("columnsTable", columns, provider, maxRowsPerPage) {
@Override
protected Item<Column> newRowItem(String id, int index, IModel<Column> model) {
return new OddEvenItem<Column>(id, index, model);
}
};
My data provider:
public class ColumnSortableDataProvider extends SortableDataProvider<Column> {
private static final long serialVersionUID = 1L;
private List<Column> list = null;
public ColumnSortableDataProvider(Table table, String sortProperty) {
this.list = Arrays.asList(table.getColumns().toArray(new Column[0]));
setSort(sortProperty, true);
}
public ColumnSortableDataProvider(List<Column> list, String sortProperty) {
this.list = list;
setSort(sortProperty, true);
}
@Override
public Iterator<? extends Column> iterator(int first, int count) {
/*
first - first row of data
count - minimum number of elements to retrieve
So this method returns an iterator capable of iterating over {first, first+count} items
*/
Iterator<Column> iterator = null;
try {
if(getSort() != null) {
Collections.sort(list, new Comparator<Column>() {
private static final long serialVersionUID = 1L;
@Override
public int compare(Column c1, Column c2) {
int result=1;
PropertyModel<Comparable> model1= new PropertyModel<Comparable>(c1, getSort().getProperty());
PropertyModel<Comparable> model2= new PropertyModel<Comparable>(c2, getSort().getProperty());
if(model1.getObject() == null && model2.getObject() == null)
result = 0;
else if(model1.getObject() == null)
result = 1;
else if(model2.getObject() == null)
result = -1;
else
result = ((Comparable)model1.getObject()).compareTo(model2.getObject());
result = getSort().isAscending() ? result : -result;
return result;
}
});
}
if (list.size() > (first+count))
iterator = list.subList(first, first+count).iterator();
else
iterator = list.iterator();
}
catch (Exception e) {
e.printStackTrace();
}
return iterator;
}
The problem is the following:
- I click a column header to sort by that column.
- I navigate to a different page
- I click Back (or Forward if I do the opposite scenario)
- Page has expired.
It'd be nice to generate the page using PageParameters but I somehow need to intercept the sort event to do so.
Any pointers would be greatly appreciated. Thanks a ton!!
I don't know at a quick glance what might be causing this, but to help diagnose it, you might want to enable debug logging for org.apache.wicket.Session, or possibly more of the Wicket code.
The retrieval of a page definitely involves calls to a method
public final Page getPage(final String pageMapName, final String path, final int versionNumber)
in this class, and it has some debug logging.
For help with setting up this logging, have a look at How to initialize log4j properly? or at the docs for log4j.
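If editing the log4j configuration isn't convenient, a rough programmatic equivalent (assuming log4j 1.x is on the classpath, which is what the linked question covers) would be:
// Raise Wicket session logging to DEBUG at runtime (log4j 1.x API)
org.apache.log4j.Logger.getLogger("org.apache.wicket.Session")
        .setLevel(org.apache.log4j.Level.DEBUG);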