I'm running code that uses a Gson converter with SimpleDateFormat, and once in a while the date formatting gets messed up: either it displays a date back in 1969-1970 (depending on the time zone), or it displays some random date.
static class DateSerializer implements JsonDeserializer<Date> {
    @Override
    public Date deserialize(JsonElement jsonElement, Type typeOF, JsonDeserializationContext context)
            throws JsonParseException {
        for (SimpleDateFormat simpleDateFormat : DATE_FORMATS) {
            try {
                simpleDateFormat.setLenient(true);
                return simpleDateFormat.parse(jsonElement.getAsString());
            } catch (ParseException e) {
            }
        }
        return null;
    }
}
static {
    final TimeZone GMT_TIMEZONE = TimeZone.getTimeZone("GMT");
    int i = 0;
    for (String format : DATE_FORMAT_STRINGS) {
        SimpleDateFormat dateFormat = new SimpleDateFormat(format, Locale.US);
        dateFormat.setTimeZone(GMT_TIMEZONE);
        DATE_FORMATS[i++] = dateFormat;
    }
}
Gson gson = new GsonBuilder()
.registerTypeAdapter(Date.class, new DateSerializer())
.create();
private static final String[] DATE_FORMAT_STRINGS = new String[]{"yyyy-MM-dd'T'HH:mm:ssZZZZ",
"yyyy-MM-dd'T'HH:mm:ss'Z'"};
The problem is that SimpleDateFormat is not thread-safe. Your deserialization is happening across multiple threads to improve performance, but due to the non-thread-safety of SimpleDateFormat, you occasionally will get garbage back in your parsed dates.
Two options for solving this problem are creating a new SimpleDateFormat each time you need it, or enforcing atomicity by doing something such as creating a lock on your date format.
For example, GSON's DefaultDateTypeAdapter takes the latter approach.
Calling setLenient(true) probably also causes dates to be parsed in a weird way, depending on the exact format. It's better to be strict about the formats you accept and keep lenient parsing disabled with setLenient(false).
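For illustration, here is a minimal sketch of the first option (the class and variable names are mine, and it assumes the same imports and DATE_FORMAT_STRINGS constant as the snippet in the question): a fresh SimpleDateFormat is created per call, so nothing mutable is shared between threads, and lenient parsing is switched off.
static class DateDeserializer implements JsonDeserializer<Date> {
    @Override
    public Date deserialize(JsonElement jsonElement, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {
        final String raw = jsonElement.getAsString();
        for (String formatString : DATE_FORMAT_STRINGS) {
            // A new instance per invocation: the SimpleDateFormat is never shared across threads
            SimpleDateFormat format = new SimpleDateFormat(formatString, Locale.US);
            format.setTimeZone(TimeZone.getTimeZone("GMT"));
            format.setLenient(false); // reject inputs that only loosely match the pattern
            try {
                return format.parse(raw);
            } catch (ParseException ignored) {
                // fall through and try the next format
            }
        }
        throw new JsonParseException("Unparseable date: " + raw);
    }
}
Throwing a JsonParseException on failure (instead of returning null as in the original) is a judgment call that makes bad input visible. Alternatively, keeping the shared formats but wrapping access to them in a synchronized block or a ThreadLocal corresponds to the second (locking) option mentioned above.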
I have a task to format amounts according to the currency specification for multiple currencies.
I am trying the logic below to format the amount:
import java.text.NumberFormat;
import java.util.Locale;

public class amountFormatter {
    public static void main(String[] args) {
        double payment = 12.125f;
        Locale US = new Locale("en", "US");
        NumberFormat nfUS = NumberFormat.getCurrencyInstance(US);
        System.out.println("amount in US format: " + nfUS.format(payment));
    }
}
The output I am getting here is as below:
amount in US format: $12.12
However I need the output as below:
amount in US format: USD12.12
The above code works fine for other country codes in the locale, like en_JP and en_AE, which return values such as JPY12 and AED12.12.
This behaviour is expected because DecimalFormat uses DecimalFormatSymbols#getCurrencySymbol to display the currency.
DecimalFormat.java, ll. 2817
case CURRENCY_SIGN:
    if (i<pattern.length() &&
        pattern.charAt(i) == CURRENCY_SIGN) {
        ++i;
        buffer.append(symbols.getInternationalCurrencySymbol());
    } else {
        buffer.append(symbols.getCurrencySymbol());
    }
    continue;
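As a quick illustration of that internal branch (a minimal sketch of my own; the class name is not from the original answer): a pattern containing a doubled currency sign (¤¤) makes DecimalFormat emit the international currency code directly:
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class CurrencyCodePatternDemo {
    public static void main(String[] args) {
        // "¤¤" (two currency signs) triggers getInternationalCurrencySymbol(), i.e. "USD"
        DecimalFormat nf = new DecimalFormat("¤¤#,##0.00",
                DecimalFormatSymbols.getInstance(new Locale("en", "US")));
        System.out.println("amount in US format: " + nf.format(12.125));
        // expected output: amount in US format: USD12.12
    }
}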
To change that behaviour you could extend the class and override the two format functions (parse as well but it will just delegate):
public class CurrencyCodeNumberFormat extends NumberFormat {

    private final NumberFormat format;

    public CurrencyCodeNumberFormat(final NumberFormat format) {
        this.format = format;
    }

    @Override
    public StringBuffer format(double number, StringBuffer result, FieldPosition fieldPosition) {
        final String unmodified = this.format.format(number, result, fieldPosition).toString();
        // here comes the magic: take the default format result and replace the currency symbol with its currency code
        final String modified = unmodified.replace(this.format.getCurrency().getSymbol(), this.format.getCurrency().getCurrencyCode());
        return new StringBuffer(modified);
    }

    @Override
    public StringBuffer format(long number, StringBuffer toAppendTo, FieldPosition pos) {
        final String unmodified = this.format.format(number, toAppendTo, pos).toString();
        // here comes the magic: take the default format result and replace the currency symbol with its currency code
        final String modified = unmodified.replace(this.format.getCurrency().getSymbol(), this.format.getCurrency().getCurrencyCode());
        return new StringBuffer(modified);
    }

    @Override
    public Number parse(String source, ParsePosition parsePosition) {
        return this.format.parse(source, parsePosition);
    }
}
Because this approach uses the decorator pattern (like the Input/OutputStream classes in Java SE), you simply have to change your
NumberFormat nfUS = NumberFormat.getCurrencyInstance(US);
to
NumberFormat nfUS = new CurrencyCodeNumberFormat(NumberFormat.getCurrencyInstance(US));
and you get
amount in US format: USD12.12
My client retrieves JSON content as below:
{
"table": "tablename",
"update": 1495104575669,
"rows": [
{"column5": 11, "column6": "yyy"},
{"column3": 22, "column4": "zzz"}
]
}
In rows array content, the key is not fixed. I want to retrieve the key and value and save into a Map using Gson 2.8.x.
How can I configure Gson to deserialize it this way?
Here is my idea:
public class Dataset {
private String table;
private long update;
private List<Rows> lists; <-- little confused here,
or private List<HashMap<String, Object>> lists;
Setter/Getter
}
public class Rows {
private HashMap<String, Object> map;
....
}
Dataset k = gson.fromJson(jsonStr, Dataset.class);
log.info(k.getRows().size()); <-- I got two null objects
Thanks.
Gson does not support such a thing out of the box. It would be nice if you could make the property name fixed. If not, there are a few options that would probably help you.
1. Just rename the Dataset.lists field to Dataset.rows, if the property name is in fact fixed (rows).
2. If the possible name set is known in advance, tell Gson to pick alternative names using @SerializedName.
3. If the possible name set is really unknown and may change in the future, you might want to make it fully dynamic using a custom TypeAdapter (streaming mode; requires less memory, but is harder to use) or a custom JsonDeserializer (object mode; requires more memory to store intermediate tree views, but is easy to use) registered with GsonBuilder.
For option #2, you can simply declare the alternative names:
@SerializedName(value = "lists", alternate = "rows")
final List<Map<String, Object>> lists;
For option #3, bind a downstream List<Map<String, Object>> type adapter and try to detect the name dynamically. Note that I omit a deserialization strategy for the Rows class for simplicity (and I believe you might want to remove the Rows class in favor of a plain Map<String, Object>). Another note: declare Map rather than a concrete collection implementation -- HashMap is unordered, whereas telling Gson you are dealing with Map lets it pick an ordered map such as LinkedTreeMap (a Gson internal) or LinkedHashMap, which may be important for datasets.
// Type tokens are immutable and can be declared constants
private static final TypeToken<String> stringTypeToken = new TypeToken<String>() {
};
private static final TypeToken<Long> longTypeToken = new TypeToken<Long>() {
};
private static final TypeToken<List<Map<String, Object>>> stringToObjectMapListTypeToken = new TypeToken<List<Map<String, Object>>>() {
};

private static final Gson gson = new GsonBuilder()
        .registerTypeAdapterFactory(new TypeAdapterFactory() {
            @Override
            public <T> TypeAdapter<T> create(final Gson gson, final TypeToken<T> typeToken) {
                if ( typeToken.getRawType() != Dataset.class ) {
                    return null;
                }
                // If the actual type token represents the Dataset class, then pick the bunch of downstream type adapters
                final TypeAdapter<String> stringTypeAdapter = gson.getDelegateAdapter(this, stringTypeToken);
                final TypeAdapter<Long> primitiveLongTypeAdapter = gson.getDelegateAdapter(this, longTypeToken);
                final TypeAdapter<List<Map<String, Object>>> stringToObjectMapListTypeAdapter = gson.getDelegateAdapter(this, stringToObjectMapListTypeToken);
                // And compose the bunch into a single dataset type adapter
                final TypeAdapter<Dataset> datasetTypeAdapter = new TypeAdapter<Dataset>() {
                    @Override
                    public void write(final JsonWriter out, final Dataset dataset) {
                        // Omitted for brevity
                        throw new UnsupportedOperationException();
                    }

                    @Override
                    public Dataset read(final JsonReader in)
                            throws IOException {
                        in.beginObject();
                        String table = null;
                        long update = 0;
                        List<Map<String, Object>> lists = null;
                        while ( in.hasNext() ) {
                            final String name = in.nextName();
                            switch ( name ) {
                            case "table":
                                table = stringTypeAdapter.read(in);
                                break;
                            case "update":
                                update = primitiveLongTypeAdapter.read(in);
                                break;
                            default:
                                lists = stringToObjectMapListTypeAdapter.read(in);
                                break;
                            }
                        }
                        in.endObject();
                        return new Dataset(table, update, lists);
                    }
                }.nullSafe(); // Making the type adapter null-safe
                @SuppressWarnings("unchecked")
                final TypeAdapter<T> typeAdapter = (TypeAdapter<T>) datasetTypeAdapter;
                return typeAdapter;
            }
        })
        .create();
final Dataset dataset = gson.fromJson(jsonReader, Dataset.class);
System.out.println(dataset.lists);
The code above would then print:
[{column5=11.0, column6=yyy}, {column3=22.0, column4=zzz}]
I am trying to configure Gson as my JSON mapper to accept "snake_case" query parameters and translate them into standard Java "camelCase" parameters.
First of all, I know I could use the #SerializedName annotation to customise the serialized name of each field, but this will involve some manual work.
After doing some search, I believe the following approach should work (please correct me if I am wrong).
1. Use Gson as the default JSON mapper of Spring Boot:
spring.http.converters.preferred-json-mapper=gson
2. Configure Gson before GsonHttpMessageConverter is created, as described here.
3. Customise the Gson naming policy in step 2 according to the GSON Field Naming Policy:
private GsonHttpMessageConverter createGsonHttpMessageConverter() {
    Gson gson = new GsonBuilder()
            .setFieldNamingPolicy(FieldNamingPolicy.LOWER_CASE_WITH_UNDERSCORES)
            .create();

    GsonHttpMessageConverter gsonConverter = new GsonHttpMessageConverter();
    gsonConverter.setGson(gson);
    return gsonConverter;
}
Then I create a simple controller like this:
@RequestMapping(value = "/example/gson-naming-policy")
public Object testNamingPolicy(ExampleParam data) {
    return data.getCamelCase();
}
With the following Param class:
import lombok.Data;

@Data
public class ExampleParam {
    private String camelCase;
}
But when I call the controller with the query parameter ?camel_case=hello, data.camelCase is not populated (it is null). When I change the query parameter to ?camelCase=hello, it is set, which means my setting is not working as expected.
Any hint would be highly appreciated. Thanks in advance!
It's a nice question. If I understand how Spring MVC works behind the scenes correctly, no HTTP converters are used for @ModelAttribute-driven binding. This is easy to verify by throwing an exception from your ExampleParam constructor or from the ExampleParam.setCamelCase method (de-Lombok first) -- Spring uses its bean utilities, which call the public (!) ExampleParam.setCamelCase, to set the DTO value. Another proof is that Gson.fromJson is never invoked, no matter how your Gson converter is configured. Your ?camelCase=hello test only adds to the confusion, because the default Gson instance happens to use that naming strategy just like Spring does -- so this is simply a matter of confusion.
In order to make it work, you have to create a custom Gson-aware HandlerMethodArgumentResolver implementation. Let's assume we support POJOs only (not lists, maps or primitives).
@Configuration
@EnableWebMvc
class WebMvcConfiguration
        extends WebMvcConfigurerAdapter {

    private static final Gson gson = new GsonBuilder()
            .setFieldNamingPolicy(LOWER_CASE_WITH_UNDERSCORES)
            .create();

    @Override
    public void addArgumentResolvers(final List<HandlerMethodArgumentResolver> argumentResolvers) {
        argumentResolvers.add(new HandlerMethodArgumentResolver() {
            @Override
            public boolean supportsParameter(final MethodParameter parameter) {
                // It must never be a primitive, array, string, boxed number, map or list -- and whatever else you configure ;)
                final Class<?> parameterType = parameter.getParameterType();
                return !parameterType.isPrimitive()
                        && !parameterType.isArray()
                        && parameterType != String.class
                        && !Number.class.isAssignableFrom(parameterType)
                        && !Map.class.isAssignableFrom(parameterType)
                        && !List.class.isAssignableFrom(parameterType);
            }

            @Override
            public Object resolveArgument(final MethodParameter parameter, final ModelAndViewContainer mavContainer, final NativeWebRequest webRequest,
                    final WebDataBinderFactory binderFactory) {
                // Now we're deconstructing the request parameters into a JSON tree, because Gson can convert from JSON trees to POJOs transparently
                // Also note parameter.getGenericParameterType() -- it's better than Class<?>, which cannot hold generic type parameterization
                return gson.fromJson(
                        parameterMapToJsonElement(webRequest.getParameterMap()),
                        parameter.getGenericParameterType()
                );
            }
        });
    }

    ...

    private static JsonElement parameterMapToJsonElement(final Map<String, String[]> parameters) {
        final JsonObject jsonObject = new JsonObject();
        for ( final Entry<String, String[]> e : parameters.entrySet() ) {
            final String key = e.getKey();
            final String[] value = e.getValue();
            final JsonElement jsonValue;
            switch ( value.length ) {
            case 0:
                // As far as I understand, this must never happen, but I'm not sure
                jsonValue = JsonNull.INSTANCE;
                break;
            case 1:
                // If there's a single value only, let's convert it to a string literal
                // Gson is good at "weak typing": strings can be parsed automatically into numbers and booleans
                jsonValue = new JsonPrimitive(value[0]);
                break;
            default:
                // If there is more than one element, make it an array
                final JsonArray jsonArray = new JsonArray();
                for ( int i = 0; i < value.length; i++ ) {
                    jsonArray.add(value[i]);
                }
                jsonValue = jsonArray;
                break;
            }
            jsonObject.add(key, jsonValue);
        }
        return jsonObject;
    }
}
So, here are the results:
http://localhost:8080/?camelCase=hello => (empty)
http://localhost:8080/?camel_case=hello => "hello"
I am trying to execute a very simple query using Spring's JdbcTemplate. I am retrieving one attribute from a record that is identified by its primary key. The entirety of the code is shown below. When I do this with a query constructed by concatenation (dangerous and ugly, and currently uncommented) it executes in 0.1 second. When I change my comments and use the parameterized query, it executes in 50 seconds. I would much prefer to get the protection that comes with the parameterized query, but 50 seconds seems like a steep price to pay. Any hints on how this could be made more reasonable?
public class JdbcEventDaoImpl {

    private static JdbcTemplate jtemp;
    private static PreparedStatement getJsonStatement;
    private static final Logger logger = LoggerFactory.getLogger(JdbcEventDaoImpl.class);

    @Autowired
    public void setDataSource(DataSource dataSource) {
        JdbcEventDaoImpl.jtemp = new JdbcTemplate(dataSource);
    }

    public String getJdbcForPosting(String aggregationId) {
        try {
            return (String) JdbcEventDaoImpl.jtemp.queryForObject("select PostingJson from PostingCollection where AggregationId = '" + aggregationId + "'", String.class);
            //return (String) JdbcEventDaoImpl.jtemp.queryForObject("select PostingJson from PostingCollection where AggregationId = ?", aggregationId, String.class);
        } catch (EmptyResultDataAccessException e) {
            return "Not Available";
        }
    }
}
I have an app that works with lists of data. The data is held in three classes as shown below. My problem is that reading and writing the data to disk takes too long for the app to be useful past about 1,000 entries. An example would be 1,000 flashcards. The file on disk is about 200K and takes about six seconds to load. An impressive 35K a second. I was hoping to be able to support tens of thousands of entries, but clearly users' attention spans would time out with a minute-long wait.
A surprisingly long part of the time is spent after the data has been read and loaded into a LinearLayout (which is in a ScrollView), while the screen refreshes. This is about 3 of the six seconds. I've been looking at different alternatives such as Parcelable (can't write to disk), Kryo and others, but the benchmarks are not that impressive. If anyone can offer me advice or guidance I would really appreciate it. The code works great - just really slow. Here are the data structures of the classes and the code I use to write and read.
Thanks,
Chris
public class iList extends Activity implements Serializable {
    String listName;
    int lastPosition;
    long dateSaved;
    String listPath;
    List<iSet> listLayout = new ArrayList<iSet>();
    List<String> operationsHistory = new ArrayList<String>();
    List<iSet> setList = new ArrayList<iSet>();

public class iSet extends Activity implements Serializable {
    private static final long serialVersionUID = 2L;
    String setName;
    List<Objecti> setObjectis = new ArrayList<Objecti>();
    List<iList> setLists = new ArrayList<iList>();
    boolean hasList = false;
    int listCount = 0;

public class Objecti extends Activity implements Serializable {
    private static final long serialVersionUID = 2L;
    private String objectName;
    private int objectType;
    private String stringValue;
    private Date dateValue;
    private float floatValue;
    private int integerValue;
    private ImageView image;
public iList readList(String pathName) {
    iList list = new iList();
    ObjectInputStream objectinputstream = null;
    try {
        FileInputStream fis = new FileInputStream(pathName);
        if (fis.available() > 0) {
            objectinputstream = new ObjectInputStream(fis);
            list = (iList) objectinputstream.readObject();
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // close the stream once reading is done
        if (objectinputstream != null) {
            try {
                objectinputstream.close();
            } catch (IOException ignored) {
            }
        }
    }
    return list;
}
public void save(String filePath) {
    try {
        File f = new File(filePath);
        FileOutputStream fout = new FileOutputStream(filePath);
        ObjectOutputStream oos = new ObjectOutputStream(fout);
        oos.writeObject(this);
        oos.flush();
        oos.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Looking at the class structure, you may think of the following:
1. Reduce the number of data members being serialized. You can do this by providing a better design.
2. Below are the association relations I could infer:
iSet = { *objectI, *iList }
iList = { *iSet }
I could not make out why this type of association is required. It looks like a cycle, which may result in duplicate data being serialized. With a better design you can serialize only the minimum data required; the rest you can re-compute after deserialization is done, as sketched below. This will shorten each IO operation.
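To illustrate that point, here is a rough sketch of my own (the class and field names are made up, not the asker's actual ones): back-references and derivable members are marked transient so they are not serialized, and they are rebuilt after deserialization instead.
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

class CardSet implements Serializable {
    private static final long serialVersionUID = 1L;
    String setName;
    List<String> cards = new ArrayList<String>();
    // Back-reference to the owning list: NOT serialized, it would create a cycle
    transient CardList owner;
    // Derived value, cheap to recompute, so not serialized either
    transient int cardCount;

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        cardCount = cards.size(); // recompute instead of storing
    }
}

class CardList implements Serializable {
    private static final long serialVersionUID = 1L;
    List<CardSet> sets = new ArrayList<CardSet>();

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        for (CardSet s : sets) {
            s.owner = this; // restore the back-reference after deserialization
        }
    }
}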
After much research, the answer I've come up with is that you can't make ObjectInputStream/ObjectOutputStream any faster without doing the work yourself. Because of the structure of my data, one iList will have many of one type of iSet, which has multiple objects with the data residing in the object. So I write all of my objects to a .txt file, and when I need to restore the object I read them from the .txt and put the list back together again. The results are extreme -- ranging from 15 to 20 times faster, which more than meets my requirements.
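For reference, a very rough sketch of that kind of hand-rolled text format (the record class, field names and tab-separated layout here are invented for illustration, not taken from the actual app): each object is written as one line and parsed back on load.
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

class Card {
    String name;
    String value;

    // One record per line, tab-separated
    String toLine() {
        return name + "\t" + value;
    }

    static Card fromLine(String line) {
        String[] parts = line.split("\t", -1);
        Card c = new Card();
        c.name = parts[0];
        c.value = parts[1];
        return c;
    }
}

class TextStore {
    static void save(List<Card> cards, String path) throws IOException {
        BufferedWriter out = new BufferedWriter(new FileWriter(path));
        try {
            for (Card c : cards) {
                out.write(c.toLine());
                out.newLine();
            }
        } finally {
            out.close();
        }
    }

    static List<Card> load(String path) throws IOException {
        List<Card> cards = new ArrayList<Card>();
        BufferedReader in = new BufferedReader(new FileReader(path));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                cards.add(Card.fromLine(line));
            }
        } finally {
            in.close();
        }
        return cards;
    }
}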