I am trying to execute a very simple query from Spring JdbcTemplate. I am retrieving one attribute from a record that is identified by primary key. The entirety of the code is shown below. When I run it with a query built by concatenation (dangerous and ugly, and currently the uncommented line) it executes in 0.1 seconds. When I swap the comments and use the parameterized query, it executes in 50 seconds. I would much prefer the protection that comes with the parameterized query, but 50 seconds seems like a steep price to pay. Any hints on how this could be made more reasonable?
public class JdbcEventDaoImpl {

    private static JdbcTemplate jtemp;
    private static PreparedStatement getJsonStatement;
    private static final Logger logger = LoggerFactory.getLogger(JdbcEventDaoImpl.class);

    @Autowired
    public void setDataSource(DataSource dataSource) {
        JdbcEventDaoImpl.jtemp = new JdbcTemplate(dataSource);
    }

    public String getJdbcForPosting(String aggregationId) {
        try {
            return (String) JdbcEventDaoImpl.jtemp.queryForObject(
                    "select PostingJson from PostingCollection where AggregationId = '" + aggregationId + "'",
                    String.class);
            //return (String) JdbcEventDaoImpl.jtemp.queryForObject(
            //        "select PostingJson from PostingCollection where AggregationId = ?", aggregationId, String.class);
        } catch (EmptyResultDataAccessException e) {
            return "Not Available";
        }
    }
}
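For reference, a hedged sketch of one thing worth ruling out (an assumption on my part, not a confirmed diagnosis): a frequent cause of a parameterized query running orders of magnitude slower than its concatenated twin is a type mismatch on the bound parameter. Some drivers bind strings as Unicode (NVARCHAR) by default, which forces an implicit conversion on the column and turns an index seek on AggregationId into a scan. Binding with an explicit JDBC type rules that out:

import java.sql.Types;

import org.springframework.dao.EmptyResultDataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;

public class JdbcEventDaoSketch {

    private final JdbcTemplate jtemp;

    public JdbcEventDaoSketch(JdbcTemplate jtemp) {
        this.jtemp = jtemp;
    }

    public String getJdbcForPosting(String aggregationId) {
        try {
            // Declaring the parameter type explicitly prevents the driver from
            // promoting it (e.g. VARCHAR -> NVARCHAR), which can defeat the index.
            return jtemp.queryForObject(
                    "select PostingJson from PostingCollection where AggregationId = ?",
                    new Object[] { aggregationId },
                    new int[] { Types.VARCHAR },
                    String.class);
        } catch (EmptyResultDataAccessException e) {
            return "Not Available";
        }
    }
}

If the database happens to be SQL Server, the driver's sendStringParametersAsUnicode=false connection property is the usual knob for this same issue.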
I am trying to cache Kafka records for a 3-minute interval, after which they should expire and be removed from the cache.
Each incoming record, fetched by a Kafka consumer written in Spring Boot, needs to be checked against the cache first: if a matching record is already present, the duplicate must be discarded; otherwise the record is added to the cache.
I have tried using a Caffeine cache as below,
@EnableCaching
public class AppCacheManagerConfig {

    @Bean
    public CacheManager cacheManager(Ticker ticker) {
        CaffeineCache bookCache = buildCache("declineRecords", ticker, 3);
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Collections.singletonList(bookCache));
        return cacheManager;
    }

    private CaffeineCache buildCache(String name, Ticker ticker, int minutesToExpire) {
        return new CaffeineCache(name, Caffeine.newBuilder()
                .expireAfterWrite(minutesToExpire, TimeUnit.MINUTES)
                .maximumSize(100)
                .ticker(ticker)
                .build());
    }

    @Bean
    public Ticker ticker() {
        return Ticker.systemTicker();
    }
}
and my Kafka Consumer is as below,
@Autowired
CachingServiceImpl cachingService;

@KafkaListener(topics = "#{'${spring.kafka.consumer.topic}'}", concurrency = "#{'${spring.kafka.consumer.concurrentConsumers}'}", errorHandler = "#{'${spring.kafka.consumer.errorHandler}'}")
public void consume(Message<?> message, Acknowledgment acknowledgment,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long createTime) {
    logger.info("Received Message: " + message.getPayload());
    try {
        boolean approveTopic = false;
        boolean duplicateRecord = false;
        if (cachingService.isDuplicateCheck(declineRecord)) {
            // do something with records
        } else {
            // do something with records
        }
        cachingService.putInCache(xmlJSONObj, declineRecord, time);
        // ... (rest of the listener omitted)
and my caching service is as below,
@Component
public class CachingServiceImpl {

    private static final Logger logger = LoggerFactory.getLogger(CachingServiceImpl.class);

    @Autowired
    CacheManager cacheManager;

    @Cacheable(value = "declineRecords", key = "#declineRecord", sync = true)
    public String putInCache(JSONObject xmlJSONObj, String declineRecord, String time) {
        logger.info("Record is cached for the 3-minute interval check: {}", declineRecord);
        cacheManager.getCache("declineRecords").put(declineRecord, time);
        return declineRecord;
    }

    public boolean isDuplicateCheck(String declineRecord) {
        return null != cacheManager.getCache("declineRecords").get(declineRecord);
    }
}
But each time a record arrives at the consumer, my cache is empty. It is not holding the records.
Modifications done:
After going through the suggestions and some further research, I added the configuration below and removed some of the earlier logic. Caching now works as expected, but the duplicate check still fails when all three consumers receive the same record.
@Configuration
public class AppCacheManagerConfig {

    public static Cache<String, Object> jsonCache =
            Caffeine.newBuilder().expireAfterWrite(3, TimeUnit.MINUTES)
                    .maximumSize(10000).recordStats().build();

    @Bean
    public CacheLoader<Object, Object> cacheLoader() {
        CacheLoader<Object, Object> cacheLoader = new CacheLoader<Object, Object>() {
            @Override
            public Object load(Object key) throws Exception {
                return null;
            }

            @Override
            public Object reload(Object key, Object oldValue) throws Exception {
                return oldValue;
            }
        };
        return cacheLoader;
    }
}
Now I am using the above cache with manual put and get calls.
I guess you're trying to implement record deduplication for Kafka.
Here is the similar discussion:
https://github.com/spring-projects/spring-kafka/issues/80
Here is the current abstract class which you may extend to achieve the necessary result:
https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/listener/adapter/AbstractFilteringMessageListener.java
Your caching service is definitely incorrect: the @Cacheable annotation is meant to mark data getters and setters so that caching is added through AOP, while in your code you clearly implement low-level cache-updating logic of your own.
At least the following changes may help you:
- Remove @Cacheable. You don't need it because you work with the cache manually, so it may be a source of conflicts (especially since you use sync = true). If that helps, remove @EnableCaching as well; it enables support for cache-related Spring annotations, which you don't need here.
- Try removing the Ticker bean (and the corresponding ticker parameters from the other beans). It should not be harmful in your configuration, but it is usually only needed for tests.
- Double-check what declineRecord is. If it is a serialized object, ensure that serialization works properly.
- Add recordStats() to the cache and write the stats() output to the log for further analysis.
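If the remaining failure is the duplicate check under three concurrent consumers, the likely race is between isDuplicateCheck and putInCache: a get followed by a separate put is not atomic. As a hedged sketch (class and method names here are illustrative, not from the original code), an atomic check-and-insert through Caffeine's ConcurrentMap view closes that window:

import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class DeduplicationSketch {

    // Same expiry/size idea as the manual jsonCache above; values are just timestamps.
    private final Cache<String, Long> seen = Caffeine.newBuilder()
            .expireAfterWrite(3, TimeUnit.MINUTES)
            .maximumSize(10_000)
            .build();

    /**
     * Returns true exactly once per key within the 3-minute window.
     * putIfAbsent on the ConcurrentMap view is atomic, so concurrent
     * consumers cannot all observe the same key as "new".
     */
    public boolean isFirstOccurrence(String recordKey) {
        return seen.asMap().putIfAbsent(recordKey, System.currentTimeMillis()) == null;
    }
}

Each consumer would call isFirstOccurrence(key) once and process the record only when it returns true; the separate get-then-put pair goes away entirely.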
I have a strange (for me) question to ask. I have created a synchronized service which is called by a controller:
@Controller
public class WebAppApiController {

    private final WebAppService webApService;

    @Autowired
    WebAppApiController(WebAppService webApService) {
        this.webApService = webApService;
    }

    @Transactional
    @PreAuthorize("hasAuthority('ROLE_API')")
    @PostMapping(value = "/api/webapp/{projectId}")
    public ResponseEntity<Status> getWebApp(@PathVariable(value = "projectId") Long id,
            @RequestBody WebAppRequestModel req) {
        return webApService.processWebAppRequest(id, req);
    }
}
The service layer just checks that there is no duplicate in the request and stores it in the database. Because the client using this endpoint makes MANY requests continuously, it happened that before one request was validated against duplicates, an identical one was put in the database; that is why I am trying the synchronized block.
@Service
public class WebAppService {

    private final static String UUID_PATTERN_TO = "[a-zA-Z0-9]{8}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{12}";

    private final WebAppRepository waRepository;

    @Autowired
    public WebAppService(WebAppRepository waRepository) {
        this.waRepository = waRepository;
    }

    @Transactional(rollbackOn = Exception.class)
    public ResponseEntity<Status> processScanWebAppRequest(Long id, WebAppScanModel webAppScanModel) {
        try {
            synchronized (this) {
                Optional<WebApp> existing = verifyForDuplicates(webAppScanModel);
                if (!existing.isPresent()) {
                    WebApp webApp = new WebApp(webAppScanModel.getUrl());
                    webApp = waRepository.save(webApp);
                    processpropertiesOfWebApp(webApp);
                    return new ResponseEntity<>(HttpStatus.CREATED);
                }
                return new ResponseEntity<>(HttpStatus.CONFLICT);
            }
        } catch (NonUniqueResultException ex) {
            return new ResponseEntity<>(HttpStatus.PRECONDITION_FAILED);
        } catch (IncorrectResultSizeDataAccessException ex) {
            return new ResponseEntity<>(HttpStatus.PRECONDITION_FAILED);
        }
    }

    Optional<WebApp> verifyForDuplicates(WebAppScanModel webAppScanModel) {
        // replaceAll swaps any concrete UUID in the URL for the UUID regex itself,
        // so the repository can regex-match every URL that differs only by UUID
        return waRepository.getWebAppByRegex(webAppScanModel.getUrl().replaceAll(UUID_PATTERN_TO, UUID_PATTERN_TO) + "$");
    }
}
And JPA method:
@Query(value = "select * from webapp wa where wa.url ~ :url", nativeQuery = true)
Optional<WebApp> getWebAppByRegex(@Param("url") String url);
The processpropertiesOfWebApp method does further processing for the given webapp, which at this point should be unique.
The intended behaviour is: when a client POST request contains multiple URLs like:
https://testdomain.com/user/7e1c44e4-821b-4d05-bdc3-ebd43dfeae5f
https://testdomain.com/user/d398316e-fd60-45a3-b036-6d55049b44d8
https://testdomain.com/user/c604b551-101f-44c4-9eeb-d9adca2b2fe9
only the first one should be stored in the database, but at the moment that is not what is happening. A select from my database:
select inserted,url from webapp where url ~ 'https://testdomain.com/users/[a-zA-Z0-9]{8}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{12}$';
2019-11-07 08:53:05 | https://testdomain.com/users/d398316e-fd60-45a3-b036-6d55049b44d8
2019-11-07 08:53:05 | https://testdomain.com/users/d398316e-fd60-45a3-b036-6d55049b44d8
2019-11-07 08:53:05 | https://testdomain.com/users/d398316e-fd60-45a3-b036-6d55049b44d8
(3 rows)
I will try to add a unique constraint on the url column, but I can't imagine this will solve the problem, since whenever the UUID changes, the new url will be unique anyway.
Could anyone give me a hint about what I am doing wrong?
The question is related to one I asked before but never found a proper solution for, so I have simplified my method, but still no success.
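For what it's worth, one plausible explanation, sketched under the assumption that processScanWebAppRequest is invoked through the Spring proxy: @Transactional commits only after the method returns, which is after the synchronized block has already been released, so a second thread can run the duplicate query before the first thread's insert is visible. A hedged sketch that opens and commits the transaction inside the lock using TransactionTemplate (WebApp, WebAppScanModel, WebAppRepository and Status are the question's own types):

import java.util.Optional;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class WebAppServiceSketch {

    private final WebAppRepository waRepository;
    private final TransactionTemplate txTemplate;

    public WebAppServiceSketch(WebAppRepository waRepository,
                               PlatformTransactionManager txManager) {
        this.waRepository = waRepository;
        this.txTemplate = new TransactionTemplate(txManager);
    }

    // Deliberately NOT @Transactional: the transaction begins and commits
    // inside the lock, so the next thread's duplicate query sees the row.
    public ResponseEntity<Status> processScanWebAppRequest(Long id, WebAppScanModel model) {
        synchronized (this) {
            return txTemplate.execute(tx -> {
                Optional<WebApp> existing = verifyForDuplicates(model);
                if (existing.isPresent()) {
                    return new ResponseEntity<>(HttpStatus.CONFLICT);
                }
                WebApp webApp = waRepository.save(new WebApp(model.getUrl()));
                processpropertiesOfWebApp(webApp);
                return new ResponseEntity<>(HttpStatus.CREATED);
            });
        }
    }

    // verifyForDuplicates(..) and processpropertiesOfWebApp(..) as in the question
}

Note this only serializes requests within a single JVM; if several application instances run against the same database, some database-level guard is still needed.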
I have spent day after day trying to find a solution for my problem with transactional methods. The logic is like this:
The controller receives a request and calls queueService, which puts it in a PriorityBlockingQueue; another thread processes the data (find cards, update status, assign to current game, return data).
Controller:
@RequestMapping("/queue")
public DeferredResult<List<Card>> queueRequest(@Params...) {
    DeferredResult<List<Card>> result = new DeferredResult<>();
    queueService.put(result, size, terminal, time);
    result.onCompletion(() -> assignmentService.assignCards(result, game, room, cliente));
    return result;
}
QueueService:
@Service
public class QueueService {

    private BlockingQueue<RequestQueue> queue = new PriorityBlockingQueue<>();

    @Autowired
    GameRepository gameRepository;
    @Autowired
    TerminalRepository terminalRepository;
    @Autowired
    RoomRepository roomRepository;

    private long requestId = 0;

    public void put(DeferredResult<List<Card>> result, int size, String client, LocalDateTime time_order) {
        requestId++;
        // omitted code (find entities: game, terminal, room)
        try {
            RequestQueue request = new RequestQueue(requestId, size, terminal, time_order, result);
            queue.put(request);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    // take() is used by the consumer thread below
    public RequestQueue take() throws InterruptedException {
        return queue.take();
    }
}
CardService:
@Transactional
public class CardService {

    @Autowired
    EntityManager em;
    @Autowired
    CardRepository cardRepository;
    @Autowired
    AsignService asignacionService;

    public List<Card> processRequest(int size, BigDecimal value) {
        List<Card> cards = em.createNativeQuery("{call cards_available(?,?,?)}", Card.class)
                .setParameter(1, false)
                .setParameter(2, value)
                .setParameter(3, size)
                .getResultList();

        List<String> ids = new ArrayList<String>();
        cards.forEach(card -> ids.add(card.getId_card()));

        String update_query = "UPDATE card SET available = true WHERE id_card IN (:ids)";
        em.createNativeQuery(update_query).setParameter("ids", ids).executeUpdate();

        return cards;
    }
}
QueueExecutor (Consumer)
@Component
public class QueueExecute {

    @Autowired
    QueueService queueRequest;
    @Autowired
    AsignService asignService;
    @Autowired
    CardService cardService;

    @PostConstruct
    public void init() {
        new Thread(this::execute).start();
    }

    private void execute() {
        while (true) {
            try {
                RequestQueue request = queueRequest.take();
                if (request != null) {
                    List<Card> cards = cardService.processRequest(request.getSize(), new BigDecimal("1.0"));
                    request.getCards().setResult((ArrayList<Card>) cards);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
AssignService:
@Transactional
public void assignCards(DeferredResult<List<Card>> cards, Game game, Room room, Terminal terminal) {
    game = em.merge(game);
    room = em.merge(room);
    terminal = em.merge(terminal);

    Order order = new Order();
    LocalDateTime datetime = LocalDateTime.now();
    BigDecimal total = new BigDecimal("0.0");
    order.setTime(datetime);
    order.setRoom(room);
    order.setGame(game);
    order.setId_terminal(terminal);

    for (Card card : (List<Card>) cards.getResult()) {
        card = em.merge(card);
        System.out.println("CARD STATUS " + card.getStatus());
        // This shows the OLD value of the Card (not updated)
        card.setOrder(order);
        order.getOrder().add(card);
    }
    game.setOrder(order);
    //gameRepository.save(game)
}
With this code the new Card status is not saved to the DB, but Game, Terminal and Room are saved OK (more or less...). If I remove the assignService step, CardService saves the new status to the DB correctly.
I have tried flushing manually, saving with the repository, and so on, but the result is almost the same. Could anybody help me?
I think I found a solution (probably not the optimal one), but it's more related to the logic of my program.
One of the main problems was the update of the Card status property, because the change was not reflected in the entity object. When the assignOrder method is called it receives the old Card value, because it is not possible to share information across threads/transactions (as far as I know). This is normal within transactions, because em.executeUpdate() only writes to the database, so if I want the updated entity I need to refresh it with em.refresh(entity), but this caused performance to go down.
In the end I changed the logic: first create the orders (transactional) and then assign cards to the orders (transactional). This way it works correctly.
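To illustrate the refresh point, a minimal hedged sketch (Card is the entity from the question; the class and method names are invented for the example): a bulk native UPDATE writes straight to the database and bypasses the persistence context, so already-loaded entities keep their stale field values until each one is refreshed, at the cost of one round trip per entity:

import java.util.List;

import javax.persistence.EntityManager;

public class CardRefreshSketch {

    private final EntityManager em;

    public CardRefreshSketch(EntityManager em) {
        this.em = em;
    }

    public void markAvailableAndRefresh(List<Card> cards, List<String> ids) {
        // The bulk statement updates rows directly in the database
        // (assuming Hibernate expands the collection parameter)...
        em.createNativeQuery("UPDATE card SET available = true WHERE id_card IN (:ids)")
                .setParameter("ids", ids)
                .executeUpdate();

        // ...so each managed Card still holds the old 'available' value
        // until it is re-read from its row.
        for (Card card : cards) {
            em.refresh(card);
        }
    }
}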
I want to know whether it is possible to call a stored procedure that returns a result set and has multiple OUT parameters using Spring Data JPA.
I found a Git issue for the same: https://github.com/spring-projects/spring-data-examples/issues/80
If it has been resolved, could someone provide an example with Spring Boot?
The way I've accomplished this in the past is to add custom behavior to a Spring Data JPA repository (link). Inside that I get the EntityManager and use java.sql.Connection and CallableStatement.
Edit: adding a high-level sample. The sample assumes you are using Hibernate, but the idea should be applicable to other providers as well.
Assuming you have an EntityRepository:
public interface EntityRepositoryCustom {
    Result storedProcCall(Input input);
}

public class EntityRepositoryImpl implements EntityRepositoryCustom {

    @PersistenceContext
    private EntityManager em;

    @Override
    public Result storedProcCall(Input input) {
        final Result result = new Result();
        Session session = getSession();
        // Instead of an anonymous class you could move this out to a
        // private static class that implements org.hibernate.jdbc.Work
        session.doWork(new Work() {
            @Override
            public void execute(Connection connection) throws SQLException {
                CallableStatement cs = null;
                try {
                    cs = connection.prepareCall("{call some_stored_proc(?, ?, ?, ?)}");
                    cs.setString(1, "");
                    cs.setString(2, "");
                    cs.registerOutParameter(3, Types.VARCHAR);
                    cs.registerOutParameter(4, Types.VARCHAR);
                    cs.execute();
                    // get values from the output params and set fields on the return object
                    result.setSomeField1(cs.getString(3));
                    result.setSomeField2(cs.getString(4));
                } finally {
                    if (cs != null) {
                        cs.close();
                    }
                }
            }
        });
        return result;
    }

    private Session getSession() {
        // get the session from the EntityManager (assuming Hibernate)
        return em.unwrap(org.hibernate.Session.class);
    }
}
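To round out the sample, the custom fragment is wired in by having the main repository interface extend it; Spring Data then picks up EntityRepositoryImpl automatically via its Impl naming convention. A sketch (the Long id type is an assumption here):

import org.springframework.data.jpa.repository.JpaRepository;

// Entity, Input and Result are the placeholder types from the sample above.
public interface EntityRepository extends JpaRepository<Entity, Long>, EntityRepositoryCustom {
    // derived query methods as usual
}

Callers then simply inject EntityRepository and call entityRepository.storedProcCall(input).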
I'm using the gemfire-json-server module in Spring XD to populate a GemFire grid with JSON representations of "Order" objects. I understand the gemfire-json-server module saves data in PDX form in GemFire. I'd like to read the contents of the GemFire grid back into an "Order" object in my application, but I get a ClassCastException that reads:
java.lang.ClassCastException: com.gemstone.gemfire.pdx.internal.PdxInstanceImpl cannot be cast to org.apache.geode.demo.cc.model.Order
I'm using the Spring Data GemFire libraries to read the contents of the cluster. The code snippet to read the contents of the grid follows:
public interface OrderRepository extends GemfireRepository<Order, String> {
    Order findByTransactionId(String transactionId);
}
How can I use Spring Data GemFire to convert data read from the GemFire cluster into an Order object?
Note: the data was initially stored in GemFire using Spring XD's gemfire-json-server module.
Still waiting to hear back from the GemFire PDX engineering team, specifically on Region.get(key), but interestingly enough, if you annotate your application domain object with...
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")
public class Order ... {
    ...
}
This works!
Under-the-hood I knew the GemFire JSONFormatter class (see here) used Jackson's API to un/marshal (de/serialize) JSON data to and from PDX.
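As a hedged illustration of what the annotation changes (the field values are invented for the demo, and Order is the test class defined further below): Jackson now embeds the fully qualified class name in the JSON it writes, which is exactly the type hint the JSON-to-PDX-to-object round trip otherwise lacks:

import com.fasterxml.jackson.databind.ObjectMapper;

public class TypeInfoDemo {

    public static void main(String[] args) throws Exception {
        // With @JsonTypeInfo(use = Id.CLASS, property = "@type") on Order,
        // this prints something like:
        // {"@type":"org.apache.geode.demo.cc.model.Order","transactionId":1,"name":"Amazon Order"}
        System.out.println(new ObjectMapper().writeValueAsString(new Order(1L, "Amazon Order")));
    }
}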
However, the orderRepository.findOne(ID) and ordersRegion.get(key) still do not function as I would expect. See updated test class below for more details.
Will report back again when I have more information.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = GemFireConfiguration.class)
@SuppressWarnings("unused")
public class JsonToPdxToObjectDataAccessIntegrationTest {
protected static final AtomicLong ID_SEQUENCE = new AtomicLong(0l);
private Order amazon;
private Order bestBuy;
private Order target;
private Order walmart;
@Autowired
private OrderRepository orderRepository;
@Resource(name = "Orders")
private com.gemstone.gemfire.cache.Region<Long, Object> orders;
protected Order createOrder(String name) {
return createOrder(ID_SEQUENCE.incrementAndGet(), name);
}
protected Order createOrder(Long id, String name) {
return new Order(id, name);
}
protected <T> T fromPdx(Object pdxInstance, Class<T> toType) {
try {
if (pdxInstance == null) {
return null;
}
else if (toType.isInstance(pdxInstance)) {
return toType.cast(pdxInstance);
}
else if (pdxInstance instanceof PdxInstance) {
return new ObjectMapper().readValue(JSONFormatter.toJSON(((PdxInstance) pdxInstance)), toType);
}
else {
throw new IllegalArgumentException(String.format("Expected object of type PdxInstance; but was (%1$s)",
pdxInstance.getClass().getName()));
}
}
catch (IOException e) {
throw new RuntimeException(String.format("Failed to convert PDX to object of type (%1$s)", toType), e);
}
}
protected void log(Object value) {
System.out.printf("Object of Type (%1$s) has Value (%2$s)", ObjectUtils.nullSafeClassName(value), value);
}
protected Order put(Order order) {
Object existingOrder = orders.putIfAbsent(order.getTransactionId(), toPdx(order));
return (existingOrder != null ? fromPdx(existingOrder, Order.class) : order);
}
protected PdxInstance toPdx(Object obj) {
try {
return JSONFormatter.fromJSON(new ObjectMapper().writeValueAsString(obj));
}
catch (JsonProcessingException e) {
throw new RuntimeException(String.format("Failed to convert object (%1$s) to JSON", obj), e);
}
}
@Before
public void setup() {
amazon = put(createOrder("Amazon Order"));
bestBuy = put(createOrder("BestBuy Order"));
target = put(createOrder("Target Order"));
walmart = put(createOrder("Wal-Mart Order"));
}
@Test
public void regionGet() {
assertThat((Order) orders.get(amazon.getTransactionId()), is(equalTo(amazon)));
}
@Test
public void repositoryFindOneMethod() {
log(orderRepository.findOne(target.getTransactionId()));
assertThat(orderRepository.findOne(target.getTransactionId()), is(equalTo(target)));
}
@Test
public void repositoryQueryMethod() {
assertThat(orderRepository.findByTransactionId(amazon.getTransactionId()), is(equalTo(amazon)));
assertThat(orderRepository.findByTransactionId(bestBuy.getTransactionId()), is(equalTo(bestBuy)));
assertThat(orderRepository.findByTransactionId(target.getTransactionId()), is(equalTo(target)));
assertThat(orderRepository.findByTransactionId(walmart.getTransactionId()), is(equalTo(walmart)));
}
#Region("Orders")
#JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "#type")
public static class Order implements PdxSerializable {
protected static final OrderPdxSerializer pdxSerializer = new OrderPdxSerializer();
@Id
private Long transactionId;
private String name;
public Order() {
}
public Order(Long transactionId) {
this.transactionId = transactionId;
}
public Order(Long transactionId, String name) {
this.transactionId = transactionId;
this.name = name;
}
public String getName() {
return name;
}
public void setName(final String name) {
this.name = name;
}
public Long getTransactionId() {
return transactionId;
}
public void setTransactionId(final Long transactionId) {
this.transactionId = transactionId;
}
@Override
public void fromData(PdxReader reader) {
Order order = (Order) pdxSerializer.fromData(Order.class, reader);
if (order != null) {
this.transactionId = order.getTransactionId();
this.name = order.getName();
}
}
@Override
public void toData(PdxWriter writer) {
pdxSerializer.toData(this, writer);
}
@Override
public boolean equals(Object obj) {
if (obj == this) {
return true;
}
if (!(obj instanceof Order)) {
return false;
}
Order that = (Order) obj;
return ObjectUtils.nullSafeEquals(this.getTransactionId(), that.getTransactionId());
}
@Override
public int hashCode() {
int hashValue = 17;
hashValue = 37 * hashValue + ObjectUtils.nullSafeHashCode(getTransactionId());
return hashValue;
}
@Override
public String toString() {
return String.format("{ #type = %1$s, id = %2$d, name = %3$s }",
getClass().getName(), getTransactionId(), getName());
}
}
public static class OrderPdxSerializer implements PdxSerializer {
@Override
public Object fromData(Class<?> type, PdxReader in) {
if (Order.class.equals(type)) {
return new Order(in.readLong("transactionId"), in.readString("name"));
}
return null;
}
@Override
public boolean toData(Object obj, PdxWriter out) {
if (obj instanceof Order) {
Order order = (Order) obj;
out.writeLong("transactionId", order.getTransactionId());
out.writeString("name", order.getName());
return true;
}
return false;
}
}
public interface OrderRepository extends GemfireRepository<Order, Long> {
Order findByTransactionId(Long transactionId);
}
@Configuration
protected static class GemFireConfiguration {
@Bean
public Properties gemfireProperties() {
Properties gemfireProperties = new Properties();
gemfireProperties.setProperty("name", JsonToPdxToObjectDataAccessIntegrationTest.class.getSimpleName());
gemfireProperties.setProperty("mcast-port", "0");
gemfireProperties.setProperty("log-level", "warning");
return gemfireProperties;
}
@Bean
public CacheFactoryBean gemfireCache(Properties gemfireProperties) {
CacheFactoryBean cacheFactoryBean = new CacheFactoryBean();
cacheFactoryBean.setProperties(gemfireProperties);
//cacheFactoryBean.setPdxSerializer(new MappingPdxSerializer());
cacheFactoryBean.setPdxSerializer(new OrderPdxSerializer());
cacheFactoryBean.setPdxReadSerialized(false);
return cacheFactoryBean;
}
@Bean(name = "Orders")
public PartitionedRegionFactoryBean ordersRegion(Cache gemfireCache) {
PartitionedRegionFactoryBean regionFactoryBean = new PartitionedRegionFactoryBean();
regionFactoryBean.setCache(gemfireCache);
regionFactoryBean.setName("Orders");
regionFactoryBean.setPersistent(false);
return regionFactoryBean;
}
@Bean
public GemfireRepositoryFactoryBean orderRepository() {
GemfireRepositoryFactoryBean<OrderRepository, Order, Long> repositoryFactoryBean =
new GemfireRepositoryFactoryBean<>();
repositoryFactoryBean.setRepositoryInterface(OrderRepository.class);
return repositoryFactoryBean;
}
}
}
So, as you are aware, GemFire (and by extension, Apache Geode) stores JSON in PDX format (as a PdxInstance). This is so GemFire can interoperate with many different language-based clients (native C++/C#, web-oriented clients (JavaScript, Python, Ruby, etc.) using the Developer REST API, in addition to Java) and also to be able to use OQL to query the JSON data.
After a bit of experimentation, I am surprised GemFire is not behaving as I would expect. I created an example, self-contained test class (i.e. no Spring XD, of course) that simulates your use case... essentially storing JSON data in GemFire as PDX and then attempting to read the data back out as the Order application domain object type using the Repository abstraction, logical enough.
Given the use of the Repository abstraction and implementation from Spring Data GemFire, the infrastructure will attempt to access the application domain object based on the Repository generic type parameter (in this case "Order" from the "OrderRepository" definition).
However, the data is stored in PDX, so now what?
No matter, Spring Data GemFire provides the MappingPdxSerializer class to convert PDX instances back to application domain objects using the same "mapping meta-data" that the Repository infrastructure uses. Cool, so I plug that in...
@Bean
public CacheFactoryBean gemfireCache(Properties gemfireProperties) {
    CacheFactoryBean cacheFactoryBean = new CacheFactoryBean();
    cacheFactoryBean.setProperties(gemfireProperties);
    cacheFactoryBean.setPdxSerializer(new MappingPdxSerializer());
    cacheFactoryBean.setPdxReadSerialized(false);
    return cacheFactoryBean;
}
You will also notice, I set the PDX 'read-serialized' property (cacheFactoryBean.setPdxReadSerialized(false);) to false in order to ensure data access operations return the domain object and not the PDX instance.
However, this had no effect on the query method. In fact, it had no effect on the following operations either...
orderRepository.findOne(amazonOrder.getTransactionId());
ordersRegion.get(amazonOrder.getTransactionId());
Both calls returned a PdxInstance. Note, the implementation of OrderRepository.findOne(..) is based on SimpleGemfireRepository.findOne(key), which uses GemfireTemplate.get(key), which just performs Region.get(key), and so is effectively the same as ordersRegion.get(amazonOrder.getTransactionId()). This should not be the outcome, especially with Region.get() and read-serialized set to false.
With the OQL query (SELECT * FROM /Orders WHERE transactionId = $1) generated from findByTransactionId(String id), the Repository infrastructure has a bit less control over what the GemFire query engine returns relative to what the caller (OrderRepository) expects (based on the generic type parameter), so OQL statements could potentially behave differently than direct Region access using get.
Next, I went on to try modifying the Order type to implement PdxSerializable, to handle the conversion during data access operations (direct Region access with get, OQL, or otherwise). This had no effect.
So, I tried implementing a custom PdxSerializer for Order objects. This had no effect either.
The only thing I can conclude at this point is that something is getting lost in translation between Order -> JSON -> PDX and then from PDX -> Order. Seemingly, GemFire needs the additional type metadata required by PDX (something like @JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")) in the JSON data, which the JSONFormatter recognizes, though I am not certain it does.
Note, in my test class, I used Jackson's ObjectMapper to serialize the Order to JSON and then GemFire's JSONFormatter to serialize the JSON to PDX, which I suspect Spring XD is doing similarly under-the-hood. In fact, Spring XD uses Spring Data GemFire and is most likely using the JSON Region Auto Proxy support. That is exactly what SDG's JSONRegionAdvice object does (see here).
Anyway, I have an inquiry out to the rest of the GemFire engineering team. There are also things that could be done in Spring Data GemFire to ensure the PDX data is converted, such as making use of the MappingPdxSerializer directly to convert the data automatically on behalf of the caller if the data is indeed of type PdxInstance. Similar to how JSON Region Auto Proxying works, you could write an AOP interceptor for the Orders Region to automagically convert PDX to an Order.
Though, I don't think any of this should be necessary as GemFire should be doing the right thing in this case. Sorry I don't have a better answer right now. Let's see what I find out.
Cheers and stay tuned!
See subsequent post for test code.