Gson: How do I deserialize an inner JSON object to a map if the property name is not fixed?

My client retrieves JSON content as below:
{
  "table": "tablename",
  "update": 1495104575669,
  "rows": [
    {"column5": 11, "column6": "yyy"},
    {"column3": 22, "column4": "zzz"}
  ]
}
In the rows array, the keys are not fixed. I want to retrieve each key and value and store them in a Map, using Gson 2.8.x.
How can I configure Gson to deserialize this?
Here is my idea:
public class Dataset {
private String table;
private long update;
private List<Rows> lists; <-- a little confused here,
or private List<HashMap<String, Object>> lists;
Setter/Getter
}
public class Rows {
private HashMap<String, Object> map;
....
}
Dataset k = gson.fromJson(jsonStr, Dataset.class);
log.info(k.getRows().size()); <-- I got two null objects
Thanks.

Gson does not support such a thing out of the box. It would be nice if you could make the property name fixed; if not, you have a few options that may help.
Just rename the Dataset.lists field to Dataset.rows, if the property name is in fact fixed (rows).
If the set of possible names is known in advance, tell Gson to accept alternative names using @SerializedName.
If the set of possible names is really unknown and may change in the future, you might want to make it fully dynamic using a custom TypeAdapter (streaming mode; requires less memory but is harder to use) or a custom JsonDeserializer (object mode; requires more memory to store intermediate tree views, but is easy to use) registered with GsonBuilder.
For option #2, you can simply list the alternative names:
@SerializedName(value = "lists", alternate = "rows")
final List<Map<String, Object>> lists;
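Since alternate accepts an array, several candidate names can be listed at once; the extra name below is only an illustration:
@SerializedName(value = "lists", alternate = {"rows", "records"})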
For option #3, bind a downstream List<Map<String, Object>> type adapter and try to detect the name dynamically. Note that I omit a deserialization strategy for the Rows class for simplicity (I believe you might want to remove the Rows class in favor of a simple Map<String, Object>). Another note: declare the field as Map rather than a concrete collection implementation -- hash maps are unordered, and telling Gson you want a Map lets it pick an ordered map such as LinkedTreeMap (Gson internals) or LinkedHashMap, which might be important for datasets.
// Type tokens are immutable and can be declared constants
private static final TypeToken<String> stringTypeToken = new TypeToken<String>() {
};
private static final TypeToken<Long> longTypeToken = new TypeToken<Long>() {
};
private static final TypeToken<List<Map<String, Object>>> stringToObjectMapListTypeToken = new TypeToken<List<Map<String, Object>>>() {
};
private static final Gson gson = new GsonBuilder()
.registerTypeAdapterFactory(new TypeAdapterFactory() {
@Override
public <T> TypeAdapter<T> create(final Gson gson, final TypeToken<T> typeToken) {
if ( typeToken.getRawType() != Dataset.class ) {
return null;
}
// If the actual type token represents the Dataset class, then pick the bunch of downstream type adapters
final TypeAdapter<String> stringTypeAdapter = gson.getDelegateAdapter(this, stringTypeToken);
final TypeAdapter<Long> primitiveLongTypeAdapter = gson.getDelegateAdapter(this, longTypeToken);
final TypeAdapter<List<Map<String, Object>>> stringToObjectMapListTypeAdapter = gson.getDelegateAdapter(this, stringToObjectMapListTypeToken);
// And compose the bunch into a single dataset type adapter
final TypeAdapter<Dataset> datasetTypeAdapter = new TypeAdapter<Dataset>() {
@Override
public void write(final JsonWriter out, final Dataset dataset) {
// Omitted for brevity
throw new UnsupportedOperationException();
}
@Override
public Dataset read(final JsonReader in)
throws IOException {
in.beginObject();
String table = null;
long update = 0;
List<Map<String, Object>> lists = null;
while ( in.hasNext() ) {
final String name = in.nextName();
switch ( name ) {
case "table":
table = stringTypeAdapter.read(in);
break;
case "update":
update = primitiveLongTypeAdapter.read(in);
break;
default:
lists = stringToObjectMapListTypeAdapter.read(in);
break;
}
}
in.endObject();
return new Dataset(table, update, lists);
}
}.nullSafe(); // Making the type adapter null-safe
@SuppressWarnings("unchecked")
final TypeAdapter<T> typeAdapter = (TypeAdapter<T>) datasetTypeAdapter;
return typeAdapter;
}
})
.create();
final Dataset dataset = gson.fromJson(jsonReader, Dataset.class);
System.out.println(dataset.lists);
The code above would then print:
[{column5=11.0, column6=yyy}, {column3=22.0, column4=zzz}]
(The numeric values appear as 11.0 and 22.0 because Gson maps JSON numbers to Double when the declared element type is Object.)
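For completeness, here is a rough sketch of the JsonDeserializer (object mode) flavor of option #3 mentioned above. It reuses the stringToObjectMapListTypeToken constant and the Dataset(table, update, lists) constructor from the type adapter example, and simply treats any property other than table and update as the dynamically named rows array:
final class DatasetJsonDeserializer implements JsonDeserializer<Dataset> {
    @Override
    public Dataset deserialize(final JsonElement json, final Type typeOfT, final JsonDeserializationContext context)
            throws JsonParseException {
        final JsonObject root = json.getAsJsonObject();
        String table = null;
        long update = 0;
        List<Map<String, Object>> lists = null;
        for ( final Map.Entry<String, JsonElement> property : root.entrySet() ) {
            switch ( property.getKey() ) {
            case "table":
                table = property.getValue().getAsString();
                break;
            case "update":
                update = property.getValue().getAsLong();
                break;
            default:
                // Whatever the unknown property is called, treat it as the rows payload
                lists = context.deserialize(property.getValue(), stringToObjectMapListTypeToken.getType());
                break;
            }
        }
        return new Dataset(table, update, lists);
    }
}
// Registration:
// final Gson gson = new GsonBuilder().registerTypeAdapter(Dataset.class, new DatasetJsonDeserializer()).create();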

Related

Configuring snake_case query parameter using Gson in Spring Boot

I am trying to configure Gson as my JSON mapper to accept "snake_case" query parameters and translate them into standard Java "camelCase" fields.
First of all, I know I could use the @SerializedName annotation to customise the serialized name of each field, but this will involve some manual work.
After doing some searching, I believe the following approach should work (please correct me if I am wrong):
Use Gson as the default JSON mapper of Spring Boot
spring.http.converters.preferred-json-mapper=gson
Configure Gson before GsonHttpMessageConverter is created, as described here
Customise the Gson naming policy in step 2 according to the GSON Field Naming Policy
private GsonHttpMessageConverter createGsonHttpMessageConverter() {
Gson gson = new GsonBuilder()
.setFieldNamingPolicy(FieldNamingPolicy.LOWER_CASE_WITH_UNDERSCORES)
.create();
GsonHttpMessageConverter gsonConverter = new GsonHttpMessageConverter();
gsonConverter.setGson(gson);
return gsonConverter;
}
Then I create a simple controller like this:
@RequestMapping(value = "/example/gson-naming-policy")
public Object testNamingPolicy(ExampleParam data) {
return data.getCamelCase();
}
With the following Param class:
import lombok.Data;
@Data
public class ExampleParam {
private String camelCase;
}
But when I call the controller with the query parameter ?camel_case=hello, data.camelCase is not populated (it's null). When I change the query parameter to ?camelCase=hello it is set, which means my setting is not working as expected.
Any hint would be highly appreciated. Thanks in advance!
It's a nice question. If I understand how Spring MVC works behind the scenes correctly, no HTTP converters are used for @ModelAttribute-driven parameters. This is easy to verify by throwing an exception from your ExampleParam constructor or the ExampleParam.setCamelCase method (de-Lombok first): Spring uses its bean utilities, which call the public (!) ExampleParam.setCamelCase, to populate the DTO. Another proof is that Gson.fromJson is never invoked, no matter how your Gson converter is configured. The camelCase behaviour only appears to work because the default Gson instance happens to use the same naming strategy as Spring's bean machinery -- so this is just a coincidence.
In order to make it work, you have to create a custom Gson-aware HandlerMethodArgumentResolver implementation. Let's assume we support POJOs only (not lists, maps or primitives).
@Configuration
@EnableWebMvc
class WebMvcConfiguration
extends WebMvcConfigurerAdapter {
private static final Gson gson = new GsonBuilder()
.setFieldNamingPolicy(LOWER_CASE_WITH_UNDERSCORES)
.create();
@Override
public void addArgumentResolvers(final List<HandlerMethodArgumentResolver> argumentResolvers) {
argumentResolvers.add(new HandlerMethodArgumentResolver() {
@Override
public boolean supportsParameter(final MethodParameter parameter) {
// It must never be a primitive, array, string, boxed number, map or list -- or whatever else you configure ;)
final Class<?> parameterType = parameter.getParameterType();
return !parameterType.isPrimitive()
&& !parameterType.isArray()
&& parameterType != String.class
&& !Number.class.isAssignableFrom(parameterType)
&& !Map.class.isAssignableFrom(parameterType)
&& !List.class.isAssignableFrom(parameterType);
}
@Override
public Object resolveArgument(final MethodParameter parameter, final ModelAndViewContainer mavContainer, final NativeWebRequest webRequest,
final WebDataBinderFactory binderFactory) {
// Now we're deconstructing the request parameters creating a JSON tree, because Gson can convert from JSON trees to POJOs transparently
// Also note parameter.getGenericParameterType() -- it's better than Class<?>, which cannot hold generic type parameterization
return gson.fromJson(
parameterMapToJsonElement(webRequest.getParameterMap()),
parameter.getGenericParameterType()
);
}
});
}
...
private static JsonElement parameterMapToJsonElement(final Map<String, String[]> parameters) {
final JsonObject jsonObject = new JsonObject();
for ( final Entry<String, String[]> e : parameters.entrySet() ) {
final String key = e.getKey();
final String[] value = e.getValue();
final JsonElement jsonValue;
switch ( value.length ) {
case 0:
// As far as I understand, this must never happen, but I'm not sure
jsonValue = JsonNull.INSTANCE;
break;
case 1:
// If there's a single value only, let's convert it to a string literal
// Gson is good at "weak typing": strings can be parsed automatically to numbers and booleans
jsonValue = new JsonPrimitive(value[0]);
break;
default:
// If there is more than one element -- make it an array
final JsonArray jsonArray = new JsonArray();
for ( int i = 0; i < value.length; i++ ) {
jsonArray.add(value[i]);
}
jsonValue = jsonArray;
break;
}
jsonObject.add(key, jsonValue);
}
return jsonObject;
}
}
So, here are the results:
http://localhost:8080/?camelCase=hello => (empty)
http://localhost:8080/?camel_case=hello => "hello"

Define a prefix root node for Jackson serialization/deserialization of a YAML document to a POJO

I found https://github.com/FasterXML/jackson-dataformat-yaml to deserialize/serialize YAML files. However, I'm having a hard time deserializing/serializing the following:
I want to define a prefix for the actual document to be parsed as a POJO, similar to a subtree of the document.
I want to define a POJO that represents the simple object representation instead of creating multiple objects.
The error "Unrecognized field "spring" (class ConfigServerProperties), not marked as ignorable (one known property: "repos"])" is thrown, but I don't know how to make the prefix "spring.cloud.config.server.git" the root element of the POJO.
Document
spring:
  cloud:
    config:
      server:
        git:
          repos:
            publisher:
              uri: 'https://github.company.com/toos/spring-cloud-config-publisher-config'
              cloneOnStart: true
              username: myuser
              password: password
              pullOnRequest: false
              differentProperty: My Value
            config_test_server_config:
              uri: 'https://github.company.com/mdesales/config-test-server-config'
              cloneOnStart: true
              username: 226b4bb85aa131cd6393acee9c484ec426111d16
              password: ""
              completelyDifferentProp: this is a different one
For this document, the requirements are as follows:
* I want to define the prefix as "spring.cloud.config.server.git".
* I want to create a POJO that represents the object.
POJO
I created the following POJOs to represent this.
ConfigServerProperties: represents the top pojo containing the list of repos.
ConfigServerOnboard: represents each of the elements of the document.
The properties of each repo are stored in a map, so that we can add as many different properties as needed.
Each class is as follows:
public class ConfigServerProperties {
private Map<String, ConfigServerOnboard> repos;
public void setRepos(Map<String, ConfigServerOnboard> repos) {
this.repos = repos;
}
public Map<String, ConfigServerOnboard> getRepos() {
return this.repos;
}
}
The second class is as follows:
public class ConfigServerOnboard {
private Map<String, String> properties;
public Map<String, String> getProperties() {
return properties;
}
public void setProperties(Map<String, String> properties) {
this.properties = properties;
}
}
Deserialize
The deserialization strategy I tried is as follows:
public static ConfigServerProperties parseProperties(File filePath)
throws JsonParseException, JsonMappingException, IOException {
ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
JsonNodeFactory jsonNodeFactory = new JsonNodeFactory(false);
jsonNodeFactory.textNode("spring.cloud.config");
// tried to use this attempting to get the prefix
mapper.setNodeFactory(jsonNodeFactory);
ConfigServerProperties user = mapper.readValue(filePath, ConfigServerProperties.class);
return user;
}
Error Returned
Exception in thread "main" com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "spring" (class com.company.platform.config.onboarding.files.config.model.ConfigServerProperties), not marked as ignorable (one known property: "repos"])
at [Source: /tmp/config-server-onboards.yml; line: 3, column: 3] (through reference chain: com.company.platform.config.onboarding.files.config.model.ConfigServerProperties["spring"])
at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:62)
at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:834)
at com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1094)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1470)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1448)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:282)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:140)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3798)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2740)
at com.company.platform.config.onboarding.files.config.model.ConfigServerProperties.parseProperties(ConfigServerProperties.java:37)
at com.company.platform.config.onboarding.files.config.model.ConfigServerProperties.main(ConfigServerProperties.java:42)
Edit 1: Looking for a possible SpringBoot Solution
I'm open to solutions using SpringBoot's ConfigurationProperties("spring.cloud.config.server.git"). That way, we could have the following:
#ConfigurationProperties("spring.cloud.config.server.git")
public class Configuration {
private Map<String, Map<String, String>> repos = new LinkedHashMap<>();
// getter/setter
}
Questions
How to set the root element of the document?
Deserialization must read the document and produce instances of the POJOs.
Serialization must produce the same document with updated values.
I had to come up with the following:
Create 6 classes, each of them with the property required for the prefix "spring.cloud.config.server.git"
SpringCloudConfigSpring.java
SpringCloudConfigCloud.java
SpringCloudConfigConfig.java
SpringCloudConfigServer.java
SpringCloudConfigGit.java
The holder of all of them is SpringCloudConfigFile.java.
The holder and all the classes have a reference to the next property, which has a reference to the next, etc, with their own setter/getter methods as usual.
public class SpringCloudConfigSpring {
private SpringCloudConfigCloud cloud;
public SpringCloudConfigCloud getCloud() {
return cloud;
}
public void setCloud(SpringCloudConfigCloud cloud) {
this.cloud = cloud;
}
}
This made the representation of the map easy to implement.
For the innermost level I used a TreeMap to keep the keys sorted, with another map to represent any property that may be added without changing the representation.
public class SpringCloudConfigGit {
TreeMap<String, Map<String, Object>> repos;
public TreeMap<String, Map<String, Object>> getRepos() {
return repos;
}
public void setRepos(TreeMap<String, Map<String, Object>> repos) {
this.repos = repos;
}
}
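For completeness, the top-level holder mentioned above would look something like this sketch (the spring field simply mirrors the top-level YAML key; the other wrapper classes follow the same getter/setter pattern):
public class SpringCloudConfigFile {
    private SpringCloudConfigSpring spring;
    public SpringCloudConfigSpring getSpring() {
        return spring;
    }
    public void setSpring(SpringCloudConfigSpring spring) {
        this.spring = spring;
    }
}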
Results
Creating the verification as follows:
public static void main(String[] args) throws JsonParseException, JsonMappingException, IOException {
File config = new File("/tmp/config-server-onboards.yml");
SpringCloudConfigFile props = ConfigServerProperties.parseProperties(config);
props.getSpring().getCloud().getConfig().getServer().getGit().getRepos().forEach((appName, properties) -> {
System.out.println("################## " + appName + " #######################3");
System.out.println(properties);
if (appName.equals("github_pages_reference")) {
properties.put("name", "Marcello");
properties.put("cloneOnStart", true);
}
System.out.println("");
});
saveProperties(new File(config.getAbsoluteFile().getParentFile(), "updated-config-onboards.yml"), props);
}
The output is as follows:
################## config_onboarding #######################3
{uri=https://github.company.com/servicesplatform-tools/spring-cloud-config-onboarding-config, cloneOnStart=true, username=226b4bb85aa131cd6393acee9c484ec426111d16, password=, pullOnRequest=false}
################## config_test_server_config #######################3
{uri=https://github.company.com/rlynch2/config-test-server-config, cloneOnStart=true, username=226b4bb85aa131cd6393acee9c484ec426111d16, password=, pullOnRequest=false}
################## github_pages_reference #######################3
{uri=https://github.company.com/servicesplatform-tools/spring-cloud-config-reference-service-config, cloneOnStart=true, username=226b4bb85aa131cd6393acee9c484ec426111d16, password=, pullOnRequest=false}
There are obvious improvements required:
I'd like to have a solution with a single class;
I'd like to have an ObjectMapper method that specifies the "subtree" of the YAML object tree that I'd like to parse (a sketch of this idea is at the end of this answer).
Maybe a more sophisticated SpringBoot-like #ConfigurationProperties("spring.cloud.config.server.git") would help.
Helper methods for loading and saving the state of these instances.
Load Method
public static SpringCloudConfigFile parseProperties(File filePath)
throws JsonParseException, JsonMappingException, IOException {
ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
SpringCloudConfigFile file = mapper.readValue(filePath, SpringCloudConfigFile.class);
return file;
}
Save Properties
public static void saveProperties(File filePath, SpringCloudConfigFile file) throws JsonGenerationException, JsonMappingException, IOException {
ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
mapper.writeValue(filePath, file);
}
File Saved
It maintained the sorted keys as implemented.
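As a rough sketch of the "subtree" improvement mentioned in the list above: Jackson can read the whole YAML into a tree first and then bind only the node under the prefix, which would avoid most of the wrapper classes. The JSON Pointer path comes from the document above, and SpringCloudConfigGit is reused as the target type; the helper method name is just for illustration:
public static SpringCloudConfigGit parseGitSubtree(File filePath) throws IOException {
    ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
    // Navigate to the prefix using a JSON Pointer instead of dedicated wrapper classes
    JsonNode gitNode = mapper.readTree(filePath).at("/spring/cloud/config/server/git");
    return mapper.treeToValue(gitNode, SpringCloudConfigGit.class);
}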

Using map of maps as Maven plugin parameters

Is it possible to use a map of maps as a Maven plugin parameter? E.g.
@Parameter
private Map<String, Map<String, String>> converters;
and then to use it like
<converters>
  <json>
    <indent>true</indent>
    <strict>true</strict>
  </json>
  <yaml>
    <stripComments>false</stripComments>
  </yaml>
</converters>
If I use it like this, converters only contains the keys json and yaml, with null as values.
I know it is possible to have complex objects as values, but is it also somehow possible to use maps for variable element values like in this example?
This is apparently a limitation of the sisu.plexus project internally used by the Mojo API. If you peek inside the MapConverter source, you'll find out that it first tries to fetch the value of the map by trying to interpret the configuration as a String (invoking fromExpression), and when this fails, looks up the expected type of the value. However this method doesn't check for parameterized types, which is our case here (since the type of the map value is Map<String, String>). I filed the bug 498757 on the Bugzilla of this project to track this.
Using a custom wrapper object
One workaround would be to not use a Map<String, String> as value but use a custom object:
@Parameter
private Map<String, Converter> converters;
with a class Converter, located in the same package as the Mojo, being:
public class Converter {
@Parameter
private Map<String, String> properties;
@Override
public String toString() { return properties.toString(); } // to test
}
You can then configure your Mojo with:
<converters>
  <json>
    <properties>
      <indent>true</indent>
      <strict>true</strict>
    </properties>
  </json>
  <yaml>
    <properties>
      <stripComments>false</stripComments>
    </properties>
  </yaml>
</converters>
This configuration will correctly inject the values in the inner-maps. It also keeps the variable aspect: the object is only introduced as a wrapper around the inner-map. I tested this with a simple test mojo having
public void execute() throws MojoExecutionException, MojoFailureException {
getLog().info(converters.toString());
}
and the output was the expected {json={indent=true, strict=true}, yaml={stripComments=false}}.
Using a custom configurator
I also found a way to keep a Map<String, Map<String, String>> by using a custom ComponentConfigurator.
So we want to fix MapConverter by inheriting from it; the trouble is how to register this new FixedMapConverter. By default, Maven uses a BasicComponentConfigurator to configure the Mojo, and it relies on a DefaultConverterLookup to look up the converter to use for a specific class. In this case, we want to provide a custom converter for Map that will return our fixed version. Therefore, we need to extend the basic configurator and register our new converter.
import org.codehaus.plexus.classworlds.realm.ClassRealm;
import org.codehaus.plexus.component.configurator.BasicComponentConfigurator;
import org.codehaus.plexus.component.configurator.ComponentConfigurationException;
import org.codehaus.plexus.component.configurator.ConfigurationListener;
import org.codehaus.plexus.component.configurator.expression.ExpressionEvaluator;
import org.codehaus.plexus.configuration.PlexusConfiguration;
public class CustomBasicComponentConfigurator extends BasicComponentConfigurator {
@Override
public void configureComponent(final Object component, final PlexusConfiguration configuration,
final ExpressionEvaluator evaluator, final ClassRealm realm, final ConfigurationListener listener)
throws ComponentConfigurationException {
converterLookup.registerConverter(new FixedMapConverter());
super.configureComponent(component, configuration, evaluator, realm, listener);
}
}
Then we need to tell Maven to use this new configurator instead of the basic one. This is a 2-step process:
Inside your Maven plugin, create a file src/main/resources/META-INF/plexus/components.xml registering the new component:
<?xml version="1.0" encoding="UTF-8"?>
<component-set>
  <components>
    <component>
      <role>org.codehaus.plexus.component.configurator.ComponentConfigurator</role>
      <role-hint>custom-basic</role-hint>
      <implementation>package.to.CustomBasicComponentConfigurator</implementation>
    </component>
  </components>
</component-set>
Note a few things: we declare a new component having the hint "custom-basic"; this will serve as an id to refer to it, and the <implementation> element refers to the fully qualified class name of our configurator.
Tell our Mojo to use this configurator with the configurator attribute of the @Mojo annotation:
@Mojo(name = "test", configurator = "custom-basic")
The configurator passed here corresponds to the role-hint specified in the components.xml above.
With such a set-up, you can finally declare
@Parameter
private Map<String, Map<String, String>> converters;
and everything will be injected properly: Maven will use our custom configurator, that will register our fixed version of the map converter and will correctly convert the inner-maps.
Full code of FixedMapConverter (which pretty much copy-pastes MapConverter because we can't override the faulty method):
public class FixedMapConverter extends MapConverter {
public Object fromConfiguration(final ConverterLookup lookup, final PlexusConfiguration configuration,
final Class<?> type, final Type[] typeArguments, final Class<?> enclosingType, final ClassLoader loader,
final ExpressionEvaluator evaluator, final ConfigurationListener listener)
throws ComponentConfigurationException {
final Object value = fromExpression(configuration, evaluator, type);
if (null != value) {
return value;
}
try {
final Map<Object, Object> map = instantiateMap(configuration, type, loader);
final Class<?> elementType = findElementType(typeArguments);
if (Object.class == elementType || String.class == elementType) {
for (int i = 0, size = configuration.getChildCount(); i < size; i++) {
final PlexusConfiguration element = configuration.getChild(i);
map.put(element.getName(), fromExpression(element, evaluator));
}
return map;
}
// handle maps with complex element types...
final ConfigurationConverter converter = lookup.lookupConverterForType(elementType);
for (int i = 0, size = configuration.getChildCount(); i < size; i++) {
Object elementValue;
final PlexusConfiguration element = configuration.getChild(i);
try {
elementValue = converter.fromConfiguration(lookup, element, elementType, enclosingType, //
loader, evaluator, listener);
}
// TEMP: remove when http://jira.codehaus.org/browse/MSHADE-168
// is fixed
catch (final ComponentConfigurationException e) {
elementValue = fromExpression(element, evaluator);
Logs.warn("Map in " + enclosingType + " declares value type as: {} but saw: {} at runtime",
elementType, null != elementValue ? elementValue.getClass() : null);
}
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
map.put(element.getName(), elementValue);
}
return map;
} catch (final ComponentConfigurationException e) {
if (null == e.getFailedConfiguration()) {
e.setFailedConfiguration(configuration);
}
throw e;
}
}
@SuppressWarnings("unchecked")
private Map<Object, Object> instantiateMap(final PlexusConfiguration configuration, final Class<?> type,
final ClassLoader loader) throws ComponentConfigurationException {
final Class<?> implType = getClassForImplementationHint(type, configuration, loader);
if (null == implType || Modifier.isAbstract(implType.getModifiers())) {
return new TreeMap<Object, Object>();
}
final Object impl = instantiateObject(implType);
failIfNotTypeCompatible(impl, type, configuration);
return (Map<Object, Object>) impl;
}
private static Class<?> findElementType( final Type[] typeArguments )
{
if ( null != typeArguments && typeArguments.length > 1 )
{
if ( typeArguments[1] instanceof Class<?> )
{
return (Class<?>) typeArguments[1];
}
// begin fix here
if ( typeArguments[1] instanceof ParameterizedType )
{
return (Class<?>) ((ParameterizedType) typeArguments[1]).getRawType();
}
// end fix here
}
return Object.class;
}
}
One solution is quite simple and works for 1-level nesting. A more sophisticated approach can be found in the alternative answer which possibly also allows for deeper nesting of Maps.
Instead of using an interface as type parameter, simply use a concrete class like TreeMap
@Parameter
private Map<String, TreeMap> converters;
The reason is this check in MapConverter, which fails for an interface but succeeds for a concrete class:
private static Class<?> findElementType( final Type[] typeArguments )
{
if ( null != typeArguments && typeArguments.length > 1
&& typeArguments[1] instanceof Class<?> )
{
return (Class<?>) typeArguments[1];
}
return Object.class;
}
As a side note, and as it is also related to this answer: for Maven > 3.3.x it also works to install a custom converter by subclassing BasicComponentConfigurator and using it as a Plexus component. BasicComponentConfigurator holds the DefaultConverterLookup as a protected member variable, which is hence easily accessible for registering custom converters.
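A hedged sketch of that side note, assuming plexus-component-annotations is on the plugin's classpath and the plexus-component-metadata plugin generates the component descriptor at build time; it reuses the FixedMapConverter shown above, and the registration in configureComponent mirrors the components.xml approach:
import org.codehaus.plexus.classworlds.realm.ClassRealm;
import org.codehaus.plexus.component.annotations.Component;
import org.codehaus.plexus.component.configurator.BasicComponentConfigurator;
import org.codehaus.plexus.component.configurator.ComponentConfigurationException;
import org.codehaus.plexus.component.configurator.ComponentConfigurator;
import org.codehaus.plexus.component.configurator.ConfigurationListener;
import org.codehaus.plexus.component.configurator.expression.ExpressionEvaluator;
import org.codehaus.plexus.configuration.PlexusConfiguration;
// Registered as a Plexus component via annotation instead of components.xml
@Component(role = ComponentConfigurator.class, hint = "custom-basic")
public class AnnotatedCustomConfigurator extends BasicComponentConfigurator {
    @Override
    public void configureComponent(final Object component, final PlexusConfiguration configuration,
            final ExpressionEvaluator evaluator, final ClassRealm realm, final ConfigurationListener listener)
            throws ComponentConfigurationException {
        // converterLookup is the protected DefaultConverterLookup mentioned above
        converterLookup.registerConverter(new FixedMapConverter());
        super.configureComponent(component, configuration, evaluator, realm, listener);
    }
}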

Read Application Object from GemFire using Spring Data GemFire. Data stored using SpringXD's gemfire-json-server

I'm using the gemfire-json-server module in SpringXD to populate a GemFire grid with json representation of “Order” objects. I understand the gemfire-json-server module saves data in Pdx form in GemFire. I’d like to read the contents of the GemFire grid into an “Order” object in my application. I get a ClassCastException that reads:
java.lang.ClassCastException: com.gemstone.gemfire.pdx.internal.PdxInstanceImpl cannot be cast to org.apache.geode.demo.cc.model.Order
I’m using the Spring Data GemFire libraries to read contents of the cluster. The code snippet to read the contents of the Grid follows:
public interface OrderRepository extends GemfireRepository<Order, String>{
Order findByTransactionId(String transactionId);
}
How can I use Spring Data GemFire to convert data read from the GemFire cluster into an Order object?
Note: The data was initially stored in GemFire using SpringXD's gemfire-json-server-module
Still waiting to hear back from the GemFire PDX engineering team, specifically on Region.get(key), but, interestingly enough, if you annotate your application domain object with...
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")
public class Order ... {
...
}
This works!
Under-the-hood I knew the GemFire JSONFormatter class (see here) used Jackson's API to un/marshal (de/serialize) JSON data to and from PDX.
However, the orderRepository.findOne(ID) and ordersRegion.get(key) still do not function as I would expect. See updated test class below for more details.
Will report back again when I have more information.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = GemFireConfiguration.class)
@SuppressWarnings("unused")
public class JsonToPdxToObjectDataAccessIntegrationTest {
protected static final AtomicLong ID_SEQUENCE = new AtomicLong(0l);
private Order amazon;
private Order bestBuy;
private Order target;
private Order walmart;
@Autowired
private OrderRepository orderRepository;
@Resource(name = "Orders")
private com.gemstone.gemfire.cache.Region<Long, Object> orders;
protected Order createOrder(String name) {
return createOrder(ID_SEQUENCE.incrementAndGet(), name);
}
protected Order createOrder(Long id, String name) {
return new Order(id, name);
}
protected <T> T fromPdx(Object pdxInstance, Class<T> toType) {
try {
if (pdxInstance == null) {
return null;
}
else if (toType.isInstance(pdxInstance)) {
return toType.cast(pdxInstance);
}
else if (pdxInstance instanceof PdxInstance) {
return new ObjectMapper().readValue(JSONFormatter.toJSON(((PdxInstance) pdxInstance)), toType);
}
else {
throw new IllegalArgumentException(String.format("Expected object of type PdxInstance; but was (%1$s)",
pdxInstance.getClass().getName()));
}
}
catch (IOException e) {
throw new RuntimeException(String.format("Failed to convert PDX to object of type (%1$s)", toType), e);
}
}
protected void log(Object value) {
System.out.printf("Object of Type (%1$s) has Value (%2$s)", ObjectUtils.nullSafeClassName(value), value);
}
protected Order put(Order order) {
Object existingOrder = orders.putIfAbsent(order.getTransactionId(), toPdx(order));
return (existingOrder != null ? fromPdx(existingOrder, Order.class) : order);
}
protected PdxInstance toPdx(Object obj) {
try {
return JSONFormatter.fromJSON(new ObjectMapper().writeValueAsString(obj));
}
catch (JsonProcessingException e) {
throw new RuntimeException(String.format("Failed to convert object (%1$s) to JSON", obj), e);
}
}
@Before
public void setup() {
amazon = put(createOrder("Amazon Order"));
bestBuy = put(createOrder("BestBuy Order"));
target = put(createOrder("Target Order"));
walmart = put(createOrder("Wal-Mart Order"));
}
@Test
public void regionGet() {
assertThat((Order) orders.get(amazon.getTransactionId()), is(equalTo(amazon)));
}
@Test
public void repositoryFindOneMethod() {
log(orderRepository.findOne(target.getTransactionId()));
assertThat(orderRepository.findOne(target.getTransactionId()), is(equalTo(target)));
}
@Test
public void repositoryQueryMethod() {
assertThat(orderRepository.findByTransactionId(amazon.getTransactionId()), is(equalTo(amazon)));
assertThat(orderRepository.findByTransactionId(bestBuy.getTransactionId()), is(equalTo(bestBuy)));
assertThat(orderRepository.findByTransactionId(target.getTransactionId()), is(equalTo(target)));
assertThat(orderRepository.findByTransactionId(walmart.getTransactionId()), is(equalTo(walmart)));
}
@Region("Orders")
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")
public static class Order implements PdxSerializable {
protected static final OrderPdxSerializer pdxSerializer = new OrderPdxSerializer();
@Id
private Long transactionId;
private String name;
public Order() {
}
public Order(Long transactionId) {
this.transactionId = transactionId;
}
public Order(Long transactionId, String name) {
this.transactionId = transactionId;
this.name = name;
}
public String getName() {
return name;
}
public void setName(final String name) {
this.name = name;
}
public Long getTransactionId() {
return transactionId;
}
public void setTransactionId(final Long transactionId) {
this.transactionId = transactionId;
}
@Override
public void fromData(PdxReader reader) {
Order order = (Order) pdxSerializer.fromData(Order.class, reader);
if (order != null) {
this.transactionId = order.getTransactionId();
this.name = order.getName();
}
}
@Override
public void toData(PdxWriter writer) {
pdxSerializer.toData(this, writer);
}
@Override
public boolean equals(Object obj) {
if (obj == this) {
return true;
}
if (!(obj instanceof Order)) {
return false;
}
Order that = (Order) obj;
return ObjectUtils.nullSafeEquals(this.getTransactionId(), that.getTransactionId());
}
@Override
public int hashCode() {
int hashValue = 17;
hashValue = 37 * hashValue + ObjectUtils.nullSafeHashCode(getTransactionId());
return hashValue;
}
@Override
public String toString() {
return String.format("{ #type = %1$s, id = %2$d, name = %3$s }",
getClass().getName(), getTransactionId(), getName());
}
}
public static class OrderPdxSerializer implements PdxSerializer {
@Override
public Object fromData(Class<?> type, PdxReader in) {
if (Order.class.equals(type)) {
return new Order(in.readLong("transactionId"), in.readString("name"));
}
return null;
}
@Override
public boolean toData(Object obj, PdxWriter out) {
if (obj instanceof Order) {
Order order = (Order) obj;
out.writeLong("transactionId", order.getTransactionId());
out.writeString("name", order.getName());
return true;
}
return false;
}
}
public interface OrderRepository extends GemfireRepository<Order, Long> {
Order findByTransactionId(Long transactionId);
}
@Configuration
protected static class GemFireConfiguration {
@Bean
public Properties gemfireProperties() {
Properties gemfireProperties = new Properties();
gemfireProperties.setProperty("name", JsonToPdxToObjectDataAccessIntegrationTest.class.getSimpleName());
gemfireProperties.setProperty("mcast-port", "0");
gemfireProperties.setProperty("log-level", "warning");
return gemfireProperties;
}
@Bean
public CacheFactoryBean gemfireCache(Properties gemfireProperties) {
CacheFactoryBean cacheFactoryBean = new CacheFactoryBean();
cacheFactoryBean.setProperties(gemfireProperties);
//cacheFactoryBean.setPdxSerializer(new MappingPdxSerializer());
cacheFactoryBean.setPdxSerializer(new OrderPdxSerializer());
cacheFactoryBean.setPdxReadSerialized(false);
return cacheFactoryBean;
}
@Bean(name = "Orders")
public PartitionedRegionFactoryBean ordersRegion(Cache gemfireCache) {
PartitionedRegionFactoryBean regionFactoryBean = new PartitionedRegionFactoryBean();
regionFactoryBean.setCache(gemfireCache);
regionFactoryBean.setName("Orders");
regionFactoryBean.setPersistent(false);
return regionFactoryBean;
}
@Bean
public GemfireRepositoryFactoryBean orderRepository() {
GemfireRepositoryFactoryBean<OrderRepository, Order, Long> repositoryFactoryBean =
new GemfireRepositoryFactoryBean<>();
repositoryFactoryBean.setRepositoryInterface(OrderRepository.class);
return repositoryFactoryBean;
}
}
}
So, as you are aware, GemFire (and by extension, Apache Geode) stores JSON in PDX format (as a PdxInstance). This is so GemFire can interoperate with many different language-based clients (native C++/C#, web-oriented clients (JavaScript, Python, Ruby, etc.) using the Developer REST API, in addition to Java) and also to be able to use OQL to query the JSON data.
After a bit of experimentation, I am surprised GemFire is not behaving as I would expect. I created an example, self-contained test class (i.e. no Spring XD, of course) that simulates your use case... essentially storing JSON data in GemFire as PDX and then attempting to read the data back out as the Order application domain object type using the Repository abstraction, logical enough.
Given the use of the Repository abstraction and implementation from Spring Data GemFire, the infrastructure will attempt to access the application domain object based on the Repository generic type parameter (in this case "Order" from the "OrderRepository" definition).
However, the data is stored in PDX, so now what?
No matter, Spring Data GemFire provides the MappingPdxSerializer class to convert PDX instances back to application domain objects using the same "mapping meta-data" that the Repository infrastructure uses. Cool, so I plug that in...
@Bean
public CacheFactoryBean gemfireCache(Properties gemfireProperties) {
CacheFactoryBean cacheFactoryBean = new CacheFactoryBean();
cacheFactoryBean.setProperties(gemfireProperties);
cacheFactoryBean.setPdxSerializer(new MappingPdxSerializer());
cacheFactoryBean.setPdxReadSerialized(false);
return cacheFactoryBean;
}
You will also notice, I set the PDX 'read-serialized' property (cacheFactoryBean.setPdxReadSerialized(false);) to false in order to ensure data access operations return the domain object and not the PDX instance.
However, this had no effect on the query method. In fact, it had no effect on the following operations either...
orderRepository.findOne(amazonOrder.getTransactionId());
ordersRegion.get(amazonOrder.getTransactionId());
Both calls returned a PdxInstance. Note, the implementation of OrderRepository.findOne(..) is based on SimpleGemfireRepository.findOne(key), which uses GemfireTemplate.get(key), which just performs Region.get(key), and so is effectively the same as ordersRegion.get(amazonOrder.getTransactionId()). The outcome should not be a PdxInstance, especially with Region.get() and read-serialized set to false.
With the OQL query (SELECT * FROM /Orders WHERE transactionId = $1) generated from the findByTransactionId(String id), the Repository infrastructure has a bit less control over what the GemFire query engine will return based on what the caller (OrderRepository) expects (based on the generic type parameter), so running OQL statements could potentially behave differently than direct Region access using get.
Next, I went on to try modifying the Order type to implement PdxSerializable, to handle the conversion during data access operations (direct Region access with get, OQL, or otherwise). This had no effect.
So, I tried to implement a custom PdxSerializer for Order objects. This had no effect either.
The only thing I can conclude at this point is that something is getting lost in translation between Order -> JSON -> PDX and then from PDX -> Order. Seemingly, GemFire needs additional type meta-data (something like @JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")) in the JSON data that the PDXFormatter recognizes, though I am not certain it does.
Note, in my test class, I used Jackson's ObjectMapper to serialize the Order to JSON and then GemFire's JSONFormatter to serialize the JSON to PDX, which I suspect Spring XD is doing similarly under-the-hood. In fact, Spring XD uses Spring Data GemFire and is most likely using the JSON Region Auto Proxy support. That is exactly what SDG's JSONRegionAdvice object does (see here).
Anyway, I have an inquiry out to the rest of the GemFire engineering team. There are also things that could be done in Spring Data GemFire to ensure the PDX data is converted, such as making use of the MappingPdxSerializer directly to convert the data automatically on behalf of the caller if the data is indeed of type PdxInstance. Similar to how JSON Region Auto Proxying works, you could write an AOP interceptor for the Orders Region to automagically convert PDX to an Order.
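For illustration, a minimal sketch of that AOP idea, assuming Spring's AspectJ auto-proxy support is enabled and the repository is a Spring bean; the pointcut package is hypothetical, and the conversion simply mirrors the fromPdx helper in the test class above:
@Aspect
public class PdxToOrderInterceptor {
    private final ObjectMapper objectMapper = new ObjectMapper();
    // Hypothetical pointcut around the repository; adjust the package to your project
    @Around("execution(* org.apache.geode.demo.cc.repo.OrderRepository.*(..))")
    public Object convertPdxResult(ProceedingJoinPoint joinPoint) throws Throwable {
        Object result = joinPoint.proceed();
        if (result instanceof PdxInstance) {
            // Convert PDX -> JSON -> Order using Jackson, just like the fromPdx helper
            return objectMapper.readValue(JSONFormatter.toJSON((PdxInstance) result), Order.class);
        }
        return result;
    }
}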
Though, I don't think any of this should be necessary as GemFire should be doing the right thing in this case. Sorry I don't have a better answer right now. Let's see what I find out.
Cheers and stay tuned!
See subsequent post for test code.

How update/remove an item already cached within a collection of items

I am working with Spring and EhCache
I have the following method
@Override
@Cacheable(value="products", key="#root.target.PRODUCTS")
public Set<Product> findAll() {
return new LinkedHashSet<>(this.productRepository.findAll());
}
I have other methods working with @Cacheable, @CachePut and @CacheEvict.
Now, imagine the database returns 100 products and they are cached under key="#root.target.PRODUCTS"; then another method inserts, updates or deletes an item in the database. The products cached under key="#root.target.PRODUCTS" are therefore no longer the same as in the database.
I mean, check the following two methods: they are able to update/delete an item, and that same item is also contained in the collection cached under key="#root.target.PRODUCTS".
@Override
@CachePut(value="products", key="#product.id")
public Product update(Product product) {
return this.productRepository.save(product);
}
@Override
@CacheEvict(value="products", key="#id")
public void delete(Integer id) {
this.productRepository.delete(id);
}
I want to know if it is possible to update/delete the item located inside the collection cached under key="#root.target.PRODUCTS"; it would then hold 100 products with the Product updated, or 499 if the Product was deleted.
My point is, I want avoid the following:
@Override
@CachePut(value="products", key="#product.id")
@CacheEvict(value="products", key="#root.target.PRODUCTS")
public Product update(Product product) {
return this.productRepository.save(product);
}
@Override
@Caching(evict={
@CacheEvict(value="products", key="#id"),
@CacheEvict(value="products", key="#root.target.PRODUCTS")
})
public void delete(Integer id) {
this.productRepository.delete(id);
}
I don't want to fetch the 500 or 499 products again just to re-cache them under key="#root.target.PRODUCTS".
Is it possible to do this? How?
Thanks in advance.
Caching the collection using the caching abstraction is a duplicate of what the underlying caching system is doing. And because this is a duplicate, it turns out that you have to resort to some kind of duplication in your own code one way or the other (the duplicate key for the set is the obvious representation of that). And because there is duplication, you have to sync state somehow.
If you really need access to both the whole set and individual elements, then you should probably use a shortcut for the easiest leg. First, you should make sure your cache contains all elements, which is not at all obvious. Far from it, actually. Considering you have that:
//EhCacheCache cache = (EhCacheCache) cacheManager.getCache("products");
@Override
public Set<Product> findAll() {
Ehcache nativeCache = cache.getNativeCache();
Map<Object, Element> elements = nativeCache.getAll(nativeCache.getKeys());
Set<Product> result = new HashSet<Product>();
for (Element element : elements.values()) {
result.add((Product) element.getObjectValue());
}
return Collections.unmodifiableSet(result);
}
The elements result is actually a lazily loaded map, so a call to values() may throw an exception. You may want to loop over the keys instead, as sketched below.
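For instance, a variant of the snippet above that fetches each element individually by key rather than calling values(); it assumes the same cache field and is only a sketch:
@Override
public Set<Product> findAll() {
    Ehcache nativeCache = cache.getNativeCache();
    Set<Product> result = new LinkedHashSet<>();
    for (Object key : nativeCache.getKeys()) {
        Element element = nativeCache.get(key); // loads each element individually
        if (element != null) {
            result.add((Product) element.getObjectValue());
        }
    }
    return Collections.unmodifiableSet(result);
}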
You have to remember that the caching abstraction eases the access to the underlying caching infrastructure and in no way it replaces it: if you had to use the API directly, this is probably what you would have to do in some sort.
Now, we can keep the conversation on SPR-12036 if you believe we can improve the caching abstraction in that area. Thanks!
I think something like this should work... Actually it's only a variation of Stéphane Nicoll's answer, of course, but it may be useful for someone. I wrote it right here and haven't checked it in an IDE, but something similar works in my project.
Override CacheResolver:
@Cacheable(value="products", key="#root.target.PRODUCTS", cacheResolver = "customCacheResolver")
Implement your own cache resolver, which searches "inside" your cached items and does the work there:
public class CustomCacheResolver implements CacheResolver {
    private static final String CACHE_NAME = "products";
    @Autowired(required = true) private CacheManager cacheManager;
    @SuppressWarnings("unchecked")
    @Override
    public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> cacheOperationInvocationContext) {
        // 1. Take the arguments from the invocation and build a new simple key
        SimpleKey newKey;
        if (cacheOperationInvocationContext.getArgs().length != 0) { // optional
            newKey = new SimpleKey(cacheOperationInvocationContext.getArgs()); // it's the key of the cached object that your @Cacheable searches for
        } else {
            // Should never happen... fall back to the DEFAULT cache handling if something is wrong with the arguments
            return new ArrayList<>(Arrays.asList(cacheManager.getCache(CACHE_NAME)));
        }
        // 2. Take the cache
        EhCacheCache ehCache = (EhCacheCache) cacheManager.getCache(CACHE_NAME); // this one we bring back
        Ehcache cache = (Ehcache) ehCache.getNativeCache(); // and this one we work with
        // 3. Modify the existing cached entry if we have it
        if (cache.getKeys().contains(newKey) && YouWantToModifyIt) {
            Element element = cache.get(newKey);
            if (element != null && !((List<Products>) element.getObjectValue()).isEmpty()) {
                List<Products> productsList = (List<Products>) element.getObjectValue();
                // ---**--- Modify your "productsList" here as you want. You may now change a single element in this list.
                ehCache.put(newKey, productsList); // this does NOT add a new entry, it OVERWRITES the existing one
            }
        // 4. Maybe "create" a new cache entry with this key if we don't have it
        } else {
            ehCache.put(newKey, YOUR_ELEMENTS);
        }
        return new ArrayList<>(Arrays.asList(ehCache)); // Bring it all back -- our "new" or "modified" cache is in there now...
    }
}
Read more about CRUD of EhCache: EhCache code samples
Hope it helps. And sorry for my English:(
I think there is a way to read the collection from Spring's underlying cache structure. You can retrieve the collection from the underlying ConcurrentHashMap as key-value pairs without using EhCache or anything else. Then you can update or remove an entry in that collection and update the cache too. Here is an example that may help:
import com.crud.model.Post;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.CacheOperationInvocationContext;
import org.springframework.cache.interceptor.CacheResolver;
import org.springframework.cache.interceptor.SimpleKey;
import org.springframework.stereotype.Component;
import java.util.*;
@Component
@Slf4j
public class CustomCacheResolver implements CacheResolver {
private static final String CACHE_NAME = "allPost";
@Autowired
private CacheManager cacheManager;
@SuppressWarnings("unchecked")
@Override
public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> cacheOperationInvocationContext) {
log.info(Arrays.toString(cacheOperationInvocationContext.getArgs()));
String method = cacheOperationInvocationContext.getMethod().toString();
Post post = null;
Long postId = null;
if(method.contains("update")) {
//get the updated post
Object[] args = cacheOperationInvocationContext.getArgs();
post = (Post) args[0];
}
else if(method.contains("delete")){
//get the post Id to delete
Object[] args = cacheOperationInvocationContext.getArgs();
postId = (Long) args[0];
}
//read the cache
Cache cache = cacheManager.getCache(CACHE_NAME);
//get the concurrent cache map in key-value pair
assert cache != null;
Map<SimpleKey, List<Post>> map = (Map<SimpleKey, List<Post>>) cache.getNativeCache();
//Convert to set to iterate
Set<Map.Entry<SimpleKey, List<Post>>> entrySet = map.entrySet();
Iterator<Map.Entry<SimpleKey, List<Post>>> itr = entrySet.iterator();
//if a iterated entry is a list then it is our desired data list!!! Yayyy
Map.Entry<SimpleKey, List<Post>> entry = null;
while (itr.hasNext()){
entry = itr.next();
if(entry.getValue() instanceof List) break;
}
//get the list
assert entry != null;
List<Post> postList = entry.getValue();
if(method.contains("update")) {
//update it
for (Post temp : postList) {
assert post != null;
if (temp.getId().equals(post.getId())) {
postList.remove(temp);
break;
}
}
postList.add(post);
}
else if(method.contains("delete")){
//delete it
for (Post temp : postList) {
if (temp.getId().equals(postId)) {
postList.remove(temp);
break;
}
}
}
//update the cache!! :D
cache.put(entry.getKey(),postList);
return new ArrayList<>(Collections.singletonList(cacheManager.getCache(CACHE_NAME)));
}
}
Here are the methods that use the CustomCacheResolver:
@Cacheable(key = "{#pageNo,#pageSize}")
public List<Post> retrieveAllPost(int pageNo,int pageSize){ // return list}
@CachePut(key = "#post.id", cacheResolver = "customCacheResolver")
public Boolean updatePost(Post post, UserDetails userDetails){ //your logic}
@CachePut(key = "#postId", cacheResolver = "customCacheResolver")
public Boolean deletePost(Long postId,UserDetails userDetails){ // your logic}
@CacheEvict(allEntries = true)
public Boolean createPost(String userId, Post post){//your logic}
Hope it helps to manipulate your spring application cache manually!
Though I don't see any easy way, you can override Ehcache functionality by supplying a cache decorator. Most probably you'd want to use EhcacheDecoratorAdapter to enhance the functions used by EhCacheCache's put and evict methods.
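A hedged sketch of what such a decorator could look like, assuming Ehcache 2.x's EhcacheDecoratorAdapter and that the collection from the question is stored under a plain "PRODUCTS" key; the syncing logic and class name are illustrative only:
public class ProductsSyncingCache extends EhcacheDecoratorAdapter {
    private static final String PRODUCTS_KEY = "PRODUCTS"; // assumed value of #root.target.PRODUCTS
    public ProductsSyncingCache(Ehcache underlyingCache) {
        super(underlyingCache);
    }
    @Override
    public void put(Element element) {
        super.put(element);
        if (element.getObjectValue() instanceof Product) {
            Element all = underlyingCache.get(PRODUCTS_KEY);
            if (all != null) {
                @SuppressWarnings("unchecked")
                Set<Product> products = (Set<Product>) all.getObjectValue();
                products.remove(element.getObjectValue()); // drop a stale copy, if any
                products.add((Product) element.getObjectValue());
                underlyingCache.put(new Element(PRODUCTS_KEY, products));
            }
        }
    }
    @Override
    public boolean remove(Object key) {
        Element removed = underlyingCache.get(key);
        boolean result = super.remove(key);
        if (removed != null && removed.getObjectValue() instanceof Product) {
            Element all = underlyingCache.get(PRODUCTS_KEY);
            if (all != null) {
                @SuppressWarnings("unchecked")
                Set<Product> products = (Set<Product>) all.getObjectValue();
                products.remove(removed.getObjectValue());
                underlyingCache.put(new Element(PRODUCTS_KEY, products));
            }
        }
        return result;
    }
}
The decorated cache would then have to be registered with the native CacheManager (for example via replaceCacheWithDecoratedCache) so that the Spring EhCacheCache wrapper picks it up.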
A simple and rude solution is:
@Cacheable(key = "{#pageNo,#pageSize}")
public List<Post> retrieveAllPost(int pageNo,int pageSize){ // return list}
@CacheEvict(allEntries = true)
public Boolean updatePost(Post post, UserDetails userDetails){ //your logic}
@CacheEvict(allEntries = true)
public Boolean deletePost(Long postId,UserDetails userDetails){ // your logic}
@CacheEvict(allEntries = true)
public Boolean createPost(String userId, Post post){//your logic}
