Code:
public static class Oya {
String name;
public Oya(String name) {
super();
this.name = name;
}
/* (non-Javadoc)
* @see java.lang.Object#toString()
*/
@Override
public String toString() {
return "Oya [name=" + name + "]";
}
}
public static void main(String[] args) throws GridException {
try (Grid grid = GridGain.start(
System.getProperty("user.home") + "/gridgain-platform-os-6.1.9-nix/examples/config/example-cache.xml")) {
GridCache<Integer, Oya> cache = grid.cache("partitioned");
boolean success2 = cache.putxIfAbsent(3, new Oya("3"));
log.info("Current 3 value = {}", cache.get(3));
cache.transform(3, (it) -> new Oya(it.name + "-transformed"));
log.info("Transformed 3 value = {}", cache.get(3));
}
}
Start another GridGain node.
Run the code. It should print: 3-transformed
Comment out the putxIfAbsent() call.
Run the code again. I expected it to print 3-transformed, but got null instead.
The code works if I change the cache value to a String (as in the GridGain Basic Operations video) or another built-in Java type, but not for my own custom class.
Peer deployment for the data grid is a development-only feature. The contract of SHARED mode is that whenever the last node that had the original class definition leaves, all classes are undeployed. For the data grid this means that the cache will be cleared. This is useful when you change class definitions.
In CONTINUOUS mode, cache classes never get undeployed, but in this case you must be careful not to change class definitions without restarting the grid nodes.
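For example, a minimal sketch of switching to CONTINUOUS mode in the Spring XML configuration (assuming, as in the stock examples, that example-cache.xml defines a GridConfiguration bean; check your file for the exact bean layout):
<bean class="org.gridgain.grid.GridConfiguration">
    <!-- Keep cache classes deployed even after the originating node leaves. -->
    <property name="deploymentMode" value="CONTINUOUS"/>
    <!-- ... the rest of the existing cache configuration ... -->
</bean>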
For more information see Deployment Modes documentation.
I have a Spring application and want to add Camel routes dynamically during my application startup. Endpoints are configured in a property file and are loaded at run time.
Using the Java DSL, I am using a for loop to create all routes:
for (int i = 0; i < allEndPoints; i++) {
    DynamcRouteBuilder route = new DynamcRouteBuilder(context, fromUri, toUri);
    camelContext.addRoutes(route);
}
private class DynamcRouteBuilder extends RouteBuilder {
    private final String from;
    private final String to;

    private DynamcRouteBuilder(CamelContext context, String from, String to) {
        super(context);
        this.from = from;
        this.to = to;
    }

    @Override
    public void configure() throws Exception {
        from(from).to(to);
    }
}
but I am getting the below exception while creating the very first route:
Failed to create route file_routedirect: at: >>> OnException[[class org.apache.camel.component.file.GenericFileOperationFailedException] -> [Log[Exception trapped ${exception.class}], process[Processor#0x0]]] <<< in route: Route(file_routedirect:)[[From[direct:... because of ref must be specified on: process[Processor#0x0]\n\ta
I am not sure what the issue is. Does anyone have a suggestion or a fix for this? Thanks.
Well, to create routes in an iteration it is nice to have an object that holds the different values for one route. Let's call it RouteConfiguration: a simple POJO with String fields for from, to, and routeId.
We are using YAML files to configure such things because you have a real List format instead of using "flat lists" in property files (route[0].from, route[0].to).
If you use Spring you can directly transform such a "list of object configurations" into a Collection of objects using @ConfigurationProperties; a sketch follows.
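A minimal sketch of such a configuration holder (the class names and the "camel" prefix are my own illustrative choices, not from the original post):
import java.util.ArrayList;
import java.util.List;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

// Binds a YAML list such as:
//   camel:
//     routes:
//       - from: "direct:in"
//         to: "log:out"
//         routeId: "route-1"
@Component
@ConfigurationProperties(prefix = "camel")
public class RoutesConfiguration {
    private final List<RouteConfiguration> routes = new ArrayList<>();

    public List<RouteConfiguration> getRoutes() {
        return routes;
    }

    public static class RouteConfiguration {
        private String from;
        private String to;
        private String routeId;

        public String getFrom() { return from; }
        public void setFrom(String from) { this.from = from; }
        public String getTo() { return to; }
        public void setTo(String to) { this.to = to; }
        public String getRouteId() { return routeId; }
        public void setRouteId(String routeId) { this.routeId = routeId; }
    }
}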
When you are able to create such a Collection of value objects, you can simply iterate over it. Here is a strongly simplified example.
@Override
public void configure() {
    createConfiguredRoutes();
}

void createConfiguredRoutes() {
    configuration.getRoutes().forEach(this::addRouteToContext);
}

// Implement the route that is added in each iteration
private void addRouteToContext(final RouteConfiguration routeConfiguration) {
    try {
        this.camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from(routeConfiguration.getFrom())
                    .routeId(routeConfiguration.getRouteId())
                    ...
                    .to(routeConfiguration.getTo());
            }
        });
    } catch (Exception e) {
        // forEach(this::addRouteToContext) cannot propagate checked exceptions,
        // so wrap and rethrow them here
        throw new RuntimeException(e);
    }
}
I am trying to write a few SonarQube custom rules for my project.
After reading the documents below -
https://docs.sonarqube.org/display/PLUG/Writing+Custom+Java+Rules+101
and
https://github.com/SonarSource/sonar-custom-rules-examples,
I created a custom rule with the classes below.
The Rule file:
@Rule(key = "MyAssertionRule")
public class FirstSonarCustomRule extends BaseTreeVisitor implements JavaFileScanner {
private static final String DEFAULT_VALUE = "Inject";
private JavaFileScannerContext context;
/**
* Name of the annotation to avoid. Value can be set by users in Quality
* profiles. The key
*/
@RuleProperty(defaultValue = DEFAULT_VALUE, description = "Name of the annotation to avoid, without the prefix @, for instance 'Override'")
protected String name;
@Override
public void scanFile(JavaFileScannerContext context) {
this.context = context;
System.out.println(PrinterVisitor.print(context.getTree()));
scan(context.getTree());
}
@Override
public void visitMethod(MethodTree tree) {
List<StatementTree> statements = tree.block().body();
for (StatementTree statement : statements) {
System.out.println("KIND IS " + statement.kind());
if (statement.is(Kind.EXPRESSION_STATEMENT)) {
if (statement.firstToken().text().equals("Assert")) {
System.out.println("ERROR");
}
}
}
}
}
The Test class:
public class FirstSonarCustomRuleTest {
@Test
public void verify() {
FirstSonarCustomRule f = new FirstSonarCustomRule();
f.name = "ASSERTION";
JavaCheckVerifier.verify("src/test/files/FirstSonarCustom.java", f);
}
}
And finally - the Test file:
class FirstSonarCustom {
int aField;
public void methodToUseTestNgAssertions() {
Assert.assertTrue(true);
}
}
The above Test file would later be my project's source code.
As per the Sonar documentation, the // Noncompliant marker is a mandatory comment in my Test file. Thus my first question: should I add this comment everywhere in my source code too?
If yes, is there any way I can avoid adding this comment? I do not want to carry that refactoring exercise all over the codebase.
Can someone suggest what I need to do here?
I am using SonarQube 6.3.
This comment is only used by the test framework (the JavaCheckVerifier class) to test the implementation of your rule. It is not mandatory in any way, and you certainly don't need it in your real code.
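For illustration, a sketch of how the marker is used in the dedicated test file only, on the line where the rule is expected to raise an issue (the message between double braces is optional, and my example text is hypothetical):
class FirstSonarCustom {
    int aField;
    public void methodToUseTestNgAssertions() {
        Assert.assertTrue(true); // Noncompliant {{Avoid this assertion}}
    }
}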
I have been working on a Java application which crawls pages from the Internet with http-client (version 4.3.3). It uses one fixedThreadPool with 5 threads, each of which is a loop thread. The pseudocode is as follows.
public class Spiderling implements Runnable {
    @Override
    public void run() {
        while (true) {
            T task = null;
            try {
                task = scheduler.poll();
                if (task != null) {
                    // if Ehcache contains the task's config:
                    //     taskConfig = Ehcache.getConfig;
                    // else:
                    //     taskConfig = query the task config from db; // close the conn every time
                    //     put taskConfig into Ehcache
                    spider(task, taskConfig);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        LOG.error("spiderling is DEAD");
    }
}
I am running it with the following arguments: -Duser.timezone=GMT+8 -server -Xms1536m -Xmx1536m -Xloggc:/home/datalord/logs/gc-2016-07-23-10-28-24.log -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC on a server (2 CPUs, 2G memory), and it crashes pretty regularly, about once every two or three days, with no OutOfMemoryError and no JVM error log.
Here is my analysis.
I analyzed the GC log with GC-EASY; the report is here. The weird thing is that the Old Gen increases slowly up to the allocated max heap size, yet a Full GC has never happened even once.
I suspected a memory leak, so I dumped the heap with jmap -dump:format=b,file=soldier.bin and used Eclipse MAT to analyze the dump file. Here is the problem suspect, an object that occupies 280+ MB.
The class "com.mysql.jdbc.NonRegisteringDriver",
loaded by "sun.misc.Launcher$AppClassLoader # 0xa0018490", occupies 281,118,144
(68.91%) bytes. The memory is accumulated in one instance of
"java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "".
Keywords
com.mysql.jdbc.NonRegisteringDriver
java.util.concurrent.ConcurrentHashMap$Segment[]
sun.misc.Launcher$AppClassLoader @ 0xa0018490.
I use c3p0-0.9.1.2 as the MySQL connection pool, mysql-connector-java-5.1.34 as the JDBC connector, and Ehcache-2.6.10 as the memory cache. I have seen all the posts about 'com.mysql.jdbc.NonRegisteringDriver memory leak' and still have no clue.
This problem has driven me crazy for several days; any advice or help will be appreciated!
**********************Supplementary description on 07-24****************
I use a Java web + ORM framework called JFinal (github.com/jfinal/jfinal), which is open source on GitHub.
Here is some core code for further description of the problem.
/**
* CacheKit. Useful tool box for EhCache.
*
*/
public class CacheKit {
private static CacheManager cacheManager;
private static final Logger log = Logger.getLogger(CacheKit.class);
static void init(CacheManager cacheManager) {
CacheKit.cacheManager = cacheManager;
}
public static CacheManager getCacheManager() {
return cacheManager;
}
static Cache getOrAddCache(String cacheName) {
Cache cache = cacheManager.getCache(cacheName);
if (cache == null) {
synchronized(cacheManager) {
cache = cacheManager.getCache(cacheName);
if (cache == null) {
log.warn("Could not find cache config [" + cacheName + "], using default.");
cacheManager.addCacheIfAbsent(cacheName);
cache = cacheManager.getCache(cacheName);
log.debug("Cache [" + cacheName + "] started.");
}
}
}
return cache;
}
public static void put(String cacheName, Object key, Object value) {
getOrAddCache(cacheName).put(new Element(key, value));
}
@SuppressWarnings("unchecked")
public static <T> T get(String cacheName, Object key) {
Element element = getOrAddCache(cacheName).get(key);
return element != null ? (T)element.getObjectValue() : null;
}
@SuppressWarnings("rawtypes")
public static List getKeys(String cacheName) {
return getOrAddCache(cacheName).getKeys();
}
public static void remove(String cacheName, Object key) {
getOrAddCache(cacheName).remove(key);
}
public static void removeAll(String cacheName) {
getOrAddCache(cacheName).removeAll();
}
@SuppressWarnings("unchecked")
public static <T> T get(String cacheName, Object key, IDataLoader dataLoader) {
Object data = get(cacheName, key);
if (data == null) {
data = dataLoader.load();
put(cacheName, key, data);
}
return (T)data;
}
@SuppressWarnings("unchecked")
public static <T> T get(String cacheName, Object key, Class<? extends IDataLoader> dataLoaderClass) {
Object data = get(cacheName, key);
if (data == null) {
try {
IDataLoader dataLoader = dataLoaderClass.newInstance();
data = dataLoader.load();
put(cacheName, key, data);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
return (T)data;
}
}
I use CacheKit like CacheKit.get("cfg_extract_rule_tree", extractRootId, new ExtractRuleTreeDataloader(extractRootId)), and the class ExtractRuleTreeDataloader will be called if nothing is found in the cache by extractRootId.
public class ExtractRuleTreeDataloader implements IDataLoader {
public static final Logger LOG = LoggerFactory.getLogger(ExtractRuleTreeDataloader.class);
private int ruleTreeId;
public ExtractRuleTreeDataloader(int ruleTreeId) {
super();
this.ruleTreeId = ruleTreeId;
}
@Override
public Object load() {
List<Record> ruleTreeList = Db.find("SELECT * FROM cfg_extract_fule WHERE root_id=?", ruleTreeId);
TreeHelper<ExtractRuleNode> treeHelper = ExtractUtil.batchRecordConvertTree(ruleTreeList); // convert List<Record> to a tree
if (treeHelper.isValidTree()) {
return treeHelper.getRoot();
} else {
LOG.warn("rule tree id :{} is an error tree #end#", ruleTreeId);
return null;
}
}
}
As I said before, I use the JFinal ORM. The Db.find method code is:
public List<Record> find(String sql, Object... paras) {
Connection conn = null;
try {
conn = config.getConnection();
return find(config, conn, sql, paras);
} catch (Exception e) {
throw new ActiveRecordException(e);
} finally {
config.close(conn);
}
}
and the config close method code is
public final void close(Connection conn) {
if (threadLocal.get() == null) // in transaction if conn in threadlocal
if (conn != null)
try {conn.close();} catch (SQLException e) {throw new ActiveRecordException(e);}
}
There is no transaction in my code, so I am pretty sure conn.close() will be called every time.
**********************more description on 07-28****************
First, I use Ehcache to store the taskConfigs in memory. The taskConfigs almost never change, so I want to store them in memory eternally and spill them to disk if memory cannot hold them all.
I used MAT to find the GC roots of NonRegisteringDriver, and the result is shown in the following picture.
The GC roots of NonRegisteringDriver
But I still don't understand why the default behavior of Ehcache leads to a memory leak. The taskConfig is a class that extends the Model class.
public class TaskConfig extends Model<TaskConfig> {
private static final long serialVersionUID = 5000070716569861947L;
public static TaskConfig DAO = new TaskConfig();
}
and the source code of Model is on this page (github.com/jfinal/jfinal/blob/jfinal-2.0/src/com/jfinal/plugin/activerecord/Model.java). And I can't find any reference (either direct or indirect) to the connection object, as @Jeremiah guessed.
Then I read the source code of NonRegisteringDriver, and I don't understand why the map field connectionPhantomRefs of NonRegisteringDriver holds more than 5000 entries of <ConnectionPhantomReference, ConnectionPhantomReference>, yet I find no ConnectionImpl in the queue field refQueue of NonRegisteringDriver. I can see the cleanup code in the class AbandonedConnectionCleanupThread, which removes the ref from NonRegisteringDriver.connectionPhantomRefs while taking abandoned connection refs from NonRegisteringDriver.refQueue.
@Override
public void run() {
threadRef = this;
while (running) {
try {
Reference<? extends ConnectionImpl> ref = NonRegisteringDriver.refQueue.remove(100);
if (ref != null) {
try {
((ConnectionPhantomReference) ref).cleanup();
} finally {
NonRegisteringDriver.connectionPhantomRefs.remove(ref);
}
}
} catch (Exception ex) {
// no where to really log this if we're static
}
}
}
Appreciate the help offered by @Jeremiah!
From the comments above I'm almost certain your memory leak is actually memory usage from EhCache. The ConcurrentHashMap you're seeing is the one backing the MemoryStore, and I'm guessing that the taskConfig holds a reference (either directly or indirectly) to the connection object, which is why it's showing in your stack.
Having eternal="true" in the default cache makes it so the inserted objects are never allowed to expire. Even without that, the timeToLive and timeToIdle values default to an infinite lifetime!
Combine that with the fact that Ehcache's default behavior when retrieving elements is to copy them (last I checked), through serialization! You're just stacking up new object references each time the taskConfig is extracted and put back into Ehcache.
The best way to test this (in my opinion) is to change your default cache configuration. Change eternal to false, and set a timeToIdleSeconds value. timeToIdleSeconds is the time (in seconds) that a value may remain in the cache without being accessed.
<ehcache>
    <diskStore path="java.io.tmpdir"/>
    <defaultCache
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdleSeconds="120"
        overflowToDisk="true"
        diskPersistent="false"
        diskExpiryThreadIntervalSeconds="120"/>
</ehcache>
If that works, then you may want to look into further tweaking your ehcache configuration settings, or providing a more customized cache reference other than default for your class.
There are multiple performance considerations when tweaking the ehcache. I'm sure that there is a better configuration for your business model. The Ehcache documentation is good, but I found the site to be a bit scattered when I was trying to figure it out. I've listed some links that I found useful below.
http://www.ehcache.org/documentation/2.8/configuration/cache-size.html
http://www.ehcache.org/documentation/2.8/configuration/configuration.html
http://www.ehcache.org/documentation/2.8/apis/cache-eviction-algorithms.html#provided-memorystore-eviction-algorithms
Good luck!
To test your memory leak try the following:
Insert a TaskConfig into Ehcache.
Immediately retrieve it back out of the cache.
Output the value of taskConfig1.equals(taskConfig2).
If it returns false, that is your memory leak. Override equals and
hashCode in your TaskConfig object and rerun the test.
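A minimal sketch of such an override, assuming TaskConfig's identity lives in a hypothetical "id" attribute of JFinal's Model attribute map (compare your real key column(s) instead):
import java.util.Objects;

public class TaskConfig extends Model<TaskConfig> {
    private static final long serialVersionUID = 5000070716569861947L;
    public static TaskConfig DAO = new TaskConfig();

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof TaskConfig)) return false;
        // "id" is a hypothetical key attribute; use your real identity column(s)
        return Objects.equals(this.get("id"), ((TaskConfig) obj).get("id"));
    }

    @Override
    public int hashCode() {
        return Objects.hash(get("id"));
    }
}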
The root cause of the Java program's crash is that the Linux OS runs out of memory and the OOM killer kills the process.
I found the log in /var/log/messages like following.
Aug 3 07:24:03 iZ233tupyzzZ kernel: Out of memory: Kill process 17308 (java) score 890 or sacrifice child
Aug 3 07:24:03 iZ233tupyzzZ kernel: Killed process 17308, UID 0, (java) total-vm:2925160kB, anon-rss:1764648kB, file-rss:248kB
Aug 3 07:24:03 iZ233tupyzzZ kernel: Thread (pooled) invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Aug 3 07:24:03 iZ233tupyzzZ kernel: Thread (pooled) cpuset=/ mems_allowed=0
Aug 3 07:24:03 iZ233tupyzzZ kernel: Pid: 6721, comm: Thread (pooled) Not tainted 2.6.32-431.23.3.el6.x86_64 #1
I also found that the default value of maxIdleTime is 20 seconds in C3p0Plugin, the c3p0 plugin in JFinal. I think this is why the NonRegisteringDriver object occupies 280+ MB in the MAT report. So I set maxIdleTime to 3600 seconds, and the NonRegisteringDriver object is no longer suspicious in the MAT report.
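For reference, a sketch of capping idle time with the plain c3p0 API (the URL and credentials are hypothetical; JFinal's C3p0Plugin wires the equivalent property):
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfig {
    public static ComboPooledDataSource newPool() {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // hypothetical URL
        ds.setUser("user");         // hypothetical credentials
        ds.setPassword("password");
        // Retire idle connections after an hour rather than the 20-second
        // value discussed above, so the pool is not constantly discarding
        // and recreating connections (each discarded one leaves a phantom
        // reference behind until cleanup runs).
        ds.setMaxIdleTime(3600);
        return ds;
    }
}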
And I reset the JVM arguments to -Xms512m -Xmx512m, and the Java program has already been running well for several days. The Full GC is triggered as expected when the Old Gen fills up.
I'm using the gemfire-json-server module in Spring XD to populate a GemFire grid with JSON representations of "Order" objects. I understand the gemfire-json-server module saves data in PDX form in GemFire. I'd like to read the contents of the GemFire grid into an "Order" object in my application, but I get a ClassCastException that reads:
java.lang.ClassCastException: com.gemstone.gemfire.pdx.internal.PdxInstanceImpl cannot be cast to org.apache.geode.demo.cc.model.Order
I'm using the Spring Data GemFire libraries to read the contents of the cluster. The code snippet that reads the contents of the grid follows:
public interface OrderRepository extends GemfireRepository<Order, String>{
Order findByTransactionId(String transactionId);
}
How can I use Spring Data GemFire to convert data read from the GemFire cluster into an Order object?
Note: The data was initially stored in GemFire using SpringXD's gemfire-json-server-module
Still waiting to hear back from the GemFire PDX engineering team, specifically on Region.get(key), but, interestingly enough, if you annotate your application domain object with...
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")
public class Order ... {
...
}
This works!
Under-the-hood I knew the GemFire JSONFormatter class (see here) used Jackson's API to un/marshal (de/serialize) JSON data to and from PDX.
However, the orderRepository.findOne(ID) and ordersRegion.get(key) still do not function as I would expect. See updated test class below for more details.
Will report back again when I have more information.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = GemFireConfiguration.class)
@SuppressWarnings("unused")
public class JsonToPdxToObjectDataAccessIntegrationTest {
protected static final AtomicLong ID_SEQUENCE = new AtomicLong(0l);
private Order amazon;
private Order bestBuy;
private Order target;
private Order walmart;
@Autowired
private OrderRepository orderRepository;
@Resource(name = "Orders")
private com.gemstone.gemfire.cache.Region<Long, Object> orders;
protected Order createOrder(String name) {
return createOrder(ID_SEQUENCE.incrementAndGet(), name);
}
protected Order createOrder(Long id, String name) {
return new Order(id, name);
}
protected <T> T fromPdx(Object pdxInstance, Class<T> toType) {
try {
if (pdxInstance == null) {
return null;
}
else if (toType.isInstance(pdxInstance)) {
return toType.cast(pdxInstance);
}
else if (pdxInstance instanceof PdxInstance) {
return new ObjectMapper().readValue(JSONFormatter.toJSON(((PdxInstance) pdxInstance)), toType);
}
else {
throw new IllegalArgumentException(String.format("Expected object of type PdxInstance; but was (%1$s)",
pdxInstance.getClass().getName()));
}
}
catch (IOException e) {
throw new RuntimeException(String.format("Failed to convert PDX to object of type (%1$s)", toType), e);
}
}
protected void log(Object value) {
System.out.printf("Object of Type (%1$s) has Value (%2$s)", ObjectUtils.nullSafeClassName(value), value);
}
protected Order put(Order order) {
Object existingOrder = orders.putIfAbsent(order.getTransactionId(), toPdx(order));
return (existingOrder != null ? fromPdx(existingOrder, Order.class) : order);
}
protected PdxInstance toPdx(Object obj) {
try {
return JSONFormatter.fromJSON(new ObjectMapper().writeValueAsString(obj));
}
catch (JsonProcessingException e) {
throw new RuntimeException(String.format("Failed to convert object (%1$s) to JSON", obj), e);
}
}
@Before
public void setup() {
amazon = put(createOrder("Amazon Order"));
bestBuy = put(createOrder("BestBuy Order"));
target = put(createOrder("Target Order"));
walmart = put(createOrder("Wal-Mart Order"));
}
@Test
public void regionGet() {
assertThat((Order) orders.get(amazon.getTransactionId()), is(equalTo(amazon)));
}
@Test
public void repositoryFindOneMethod() {
log(orderRepository.findOne(target.getTransactionId()));
assertThat(orderRepository.findOne(target.getTransactionId()), is(equalTo(target)));
}
@Test
public void repositoryQueryMethod() {
assertThat(orderRepository.findByTransactionId(amazon.getTransactionId()), is(equalTo(amazon)));
assertThat(orderRepository.findByTransactionId(bestBuy.getTransactionId()), is(equalTo(bestBuy)));
assertThat(orderRepository.findByTransactionId(target.getTransactionId()), is(equalTo(target)));
assertThat(orderRepository.findByTransactionId(walmart.getTransactionId()), is(equalTo(walmart)));
}
@Region("Orders")
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")
public static class Order implements PdxSerializable {
protected static final OrderPdxSerializer pdxSerializer = new OrderPdxSerializer();
@Id
private Long transactionId;
private String name;
public Order() {
}
public Order(Long transactionId) {
this.transactionId = transactionId;
}
public Order(Long transactionId, String name) {
this.transactionId = transactionId;
this.name = name;
}
public String getName() {
return name;
}
public void setName(final String name) {
this.name = name;
}
public Long getTransactionId() {
return transactionId;
}
public void setTransactionId(final Long transactionId) {
this.transactionId = transactionId;
}
@Override
public void fromData(PdxReader reader) {
Order order = (Order) pdxSerializer.fromData(Order.class, reader);
if (order != null) {
this.transactionId = order.getTransactionId();
this.name = order.getName();
}
}
@Override
public void toData(PdxWriter writer) {
pdxSerializer.toData(this, writer);
}
@Override
public boolean equals(Object obj) {
if (obj == this) {
return true;
}
if (!(obj instanceof Order)) {
return false;
}
Order that = (Order) obj;
return ObjectUtils.nullSafeEquals(this.getTransactionId(), that.getTransactionId());
}
@Override
public int hashCode() {
int hashValue = 17;
hashValue = 37 * hashValue + ObjectUtils.nullSafeHashCode(getTransactionId());
return hashValue;
}
@Override
public String toString() {
return String.format("{ @type = %1$s, id = %2$d, name = %3$s }",
getClass().getName(), getTransactionId(), getName());
}
}
public static class OrderPdxSerializer implements PdxSerializer {
@Override
public Object fromData(Class<?> type, PdxReader in) {
if (Order.class.equals(type)) {
return new Order(in.readLong("transactionId"), in.readString("name"));
}
return null;
}
@Override
public boolean toData(Object obj, PdxWriter out) {
if (obj instanceof Order) {
Order order = (Order) obj;
out.writeLong("transactionId", order.getTransactionId());
out.writeString("name", order.getName());
return true;
}
return false;
}
}
public interface OrderRepository extends GemfireRepository<Order, Long> {
Order findByTransactionId(Long transactionId);
}
@Configuration
protected static class GemFireConfiguration {
@Bean
public Properties gemfireProperties() {
Properties gemfireProperties = new Properties();
gemfireProperties.setProperty("name", JsonToPdxToObjectDataAccessIntegrationTest.class.getSimpleName());
gemfireProperties.setProperty("mcast-port", "0");
gemfireProperties.setProperty("log-level", "warning");
return gemfireProperties;
}
@Bean
public CacheFactoryBean gemfireCache(Properties gemfireProperties) {
CacheFactoryBean cacheFactoryBean = new CacheFactoryBean();
cacheFactoryBean.setProperties(gemfireProperties);
//cacheFactoryBean.setPdxSerializer(new MappingPdxSerializer());
cacheFactoryBean.setPdxSerializer(new OrderPdxSerializer());
cacheFactoryBean.setPdxReadSerialized(false);
return cacheFactoryBean;
}
@Bean(name = "Orders")
public PartitionedRegionFactoryBean ordersRegion(Cache gemfireCache) {
PartitionedRegionFactoryBean regionFactoryBean = new PartitionedRegionFactoryBean();
regionFactoryBean.setCache(gemfireCache);
regionFactoryBean.setName("Orders");
regionFactoryBean.setPersistent(false);
return regionFactoryBean;
}
@Bean
public GemfireRepositoryFactoryBean orderRepository() {
GemfireRepositoryFactoryBean<OrderRepository, Order, Long> repositoryFactoryBean =
new GemfireRepositoryFactoryBean<>();
repositoryFactoryBean.setRepositoryInterface(OrderRepository.class);
return repositoryFactoryBean;
}
}
}
So, as you are aware, GemFire (and by extension, Apache Geode) stores JSON in PDX format (as a PdxInstance). This is so GemFire can interoperate with many different language-based clients (native C++/C#, web-oriented clients (JavaScript, Python, Ruby, etc.) using the Developer REST API, in addition to Java) and also to be able to use OQL to query the JSON data.
After a bit of experimentation, I am surprised GemFire is not behaving as I would expect. I created an example, self-contained test class (i.e. no Spring XD, of course) that simulates your use case... essentially storing JSON data in GemFire as PDX and then attempting to read the data back out as the Order application domain object type using the Repository abstraction, logical enough.
Given the use of the Repository abstraction and implementation from Spring Data GemFire, the infrastructure will attempt to access the application domain object based on the Repository generic type parameter (in this case "Order" from the "OrderRepository" definition).
However, the data is stored in PDX, so now what?
No matter, Spring Data GemFire provides the MappingPdxSerializer class to convert PDX instances back to application domain objects using the same "mapping meta-data" that the Repository infrastructure uses. Cool, so I plug that in...
@Bean
public CacheFactoryBean gemfireCache(Properties gemfireProperties) {
CacheFactoryBean cacheFactoryBean = new CacheFactoryBean();
cacheFactoryBean.setProperties(gemfireProperties);
cacheFactoryBean.setPdxSerializer(new MappingPdxSerializer());
cacheFactoryBean.setPdxReadSerialized(false);
return cacheFactoryBean;
}
You will also notice that I set the PDX 'read-serialized' property (cacheFactoryBean.setPdxReadSerialized(false);) to false in order to ensure data access operations return the domain object and not the PDX instance.
However, this had no effect on the query method. In fact, it had no effect on the following operations either...
orderRepository.findOne(amazonOrder.getTransactionId());
ordersRegion.get(amazonOrder.getTransactionId());
Both calls returned a PdxInstance. Note, the implementation of OrderRepository.findOne(..) is based on SimpleGemfireRepository.findOne(key), which uses GemfireTemplate.get(key), which just performs Region.get(key), and so is effectively the same as ordersRegion.get(amazonOrder.getTransactionId()). The outcome should not be a PdxInstance, especially with Region.get() and read-serialized set to false.
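As a stop-gap (my suggestion, not an official fix), the conversion can be forced by hand using the fromPdx(..) helper from the test class above:
Object raw = ordersRegion.get(amazonOrder.getTransactionId());
// Explicitly unwrap the PdxInstance until the read-serialized behavior is resolved
Order order = fromPdx(raw, Order.class); // fromPdx(..) is defined in the test class above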
With the OQL query (SELECT * FROM /Orders WHERE transactionId = $1) generated from the findByTransactionId(String id), the Repository infrastructure has a bit less control over what the GemFire query engine will return based on what the caller (OrderRepository) expects (based on the generic type parameter), so running OQL statements could potentially behave differently than direct Region access using get.
Next, I went on to try modifying the Order type to implement PdxSerializable, to handle the conversion during data access operations (direct Region access with get, OQL, or otherwise). This had no effect.
So, I tried to implement a custom PdxSerializer for Order objects. This had no effect either.
The only thing I can conclude at this point is that something is getting lost in translation between Order -> JSON -> PDX and then from PDX -> Order. Seemingly, GemFire needs the additional type meta-data required by PDX (something like @JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")) in the JSON data that the JSONFormatter recognizes, though I am not certain it does.
Note, in my test class, I used Jackson's ObjectMapper to serialize the Order to JSON and then GemFire's JSONFormatter to serialize the JSON to PDX, which I suspect Spring XD is doing similarly under-the-hood. In fact, Spring XD uses Spring Data GemFire and is most likely using the JSON Region Auto Proxy support. That is exactly what SDG's JSONRegionAdvice object does (see here).
Anyway, I have an inquiry out to the rest of the GemFire engineering team. There are also things that could be done in Spring Data GemFire to ensure the PDX data is converted, such as making use of the MappingPdxSerializer directly to convert the data automatically on behalf of the caller if the data is indeed of type PdxInstance. Similar to how JSON Region Auto Proxying works, you could write an AOP interceptor for the Orders Region to automagically convert PDX to an Order.
Though, I don't think any of this should be necessary as GemFire should be doing the right thing in this case. Sorry I don't have a better answer right now. Let's see what I find out.
Cheers and stay tuned!
See subsequent post for test code.
I am using JMeter with Java Request samplers. These call Java classes I have written which return a SampleResult object containing the timing metrics for the use case. SampleResult is a tree and can have child SampleResult objects (see the SampleResult.addSubResult method). I can't seem to find a good way in JMeter to track the sub results, so I can only easily get the results for the parent SampleResult.
Is there a listener in JMeter that allows me to see statistics/graphs for sub results (for instance, the average time across all sub results with the same name)?
I have just succeeded in doing this and wanted to share it. If you follow the instructions I provide here, it will work for you as well. I did this for the summary table listener, on Windows, using Eclipse.
Steps:
Go to JMeter's web site and download the source code. You can find that here, for version 3.0.
http://jmeter.apache.org/download_jmeter.cgi
Once there, I clicked the option to download the Zip file for the source.
Then, on that same page, download the binary for version 3.0, if you have not already done so. Then, extract that zip file onto your hard drive.
Once you've extracted the zip file to your hard drive, grab the file "SummaryReport.java". It can be found here: "\apache-jmeter-3.0\src\components\org\apache\jmeter\visualizers\SummaryReport.java"
Create a new class in Eclipse, then Copy/Paste all of that code into your new class. Then, rename your class from what it is, "SummaryReport" to a different name. And everywhere in the code, replace "SummaryReport" with the new name of your class.
I am using Java 8. So, there is one line of code that won't compile for me. It's the line below.
private final Map tableRows = new ConcurrentHashMap<>();
I needed to remove the <> on that line to get it to compile.
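After that change, the line reads:
private final Map tableRows = new ConcurrentHashMap();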
There was one more line that gave a compile error. It was the one below.
CSVSaveService.saveCSVStats(StatGraphVisualizer.getAllTableData(model, FORMATS), writer,
saveHeaders.isSelected() ? StatGraphVisualizer.getLabels(COLUMNS) : null);
Firstly, it wasn't finding the source for class StatGraphVisualizer. So, I imported it, as below.
import org.apache.jmeter.visualizers.StatGraphVisualizer;
Secondly, it wasn't finding the method "getLabels" in "StatGraphVisualizer". Here is what this line of code looked like after I fixed it:
CSVSaveService.saveCSVStats(StatGraphVisualizer.getAllTableData(model, FORMATS),writer);
That compiles. That method doesn't need the second argument.
Now, everything should compile.
Find this method below. This is where you will begin adding your customizations.
@Override
public void add(final SampleResult res) {
You need to create an array of all of your sub results, as I did, as seen below. The line in Bold is the new code. (All new code is seen in Bold).
public void add(final SampleResult res) {
final String sampleLabel = res.getSampleLabel(); // useGroupName.isSelected());
**final SampleResult[] theSubResults = res.getSubResults();**
Then, create a String for each label for your sub results objects, as seen below.
**final String writesampleLabel = theSubResults[0].getSampleLabel(); // (useGroupName.isSelected());
final String readsampleLabel = theSubResults[1].getSampleLabel(); // (useGroupName.isSelected());**
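One caveat from me (not part of the original steps): res.getSubResults() can return an empty array for samples that carry no sub results, so it may be safer to guard the index accesses right after fetching the array, for example:
// Skip samples that do not carry the two expected sub results
if (theSubResults.length < 2) {
    return;
}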
Next, go to the method below.
JMeterUtils.runSafe(false, new Runnable() {
@Override
public void run() {
The new code added is below, in Bold.
JMeterUtils.runSafe(false, new Runnable() {
@Override
public void run() {
Calculator row = null;
**Calculator row1 = null;
Calculator row2 = null;**
synchronized (lock) {
row = tableRows.get(sampleLabel);
**row1 = tableRows.get(writesampleLabel);
row2 = tableRows.get(readsampleLabel);**
if (row == null) {
row = new Calculator(sampleLabel);
tableRows.put(row.getLabel(), row);
model.insertRow(row, model.getRowCount() - 1);
}
**if (row1 == null) {
row1 = new Calculator(writesampleLabel);
tableRows.put(row1.getLabel(), row1);
model.insertRow(row1, model.getRowCount() - 1);
}
if (row2 == null) {
row2 = new Calculator(readsampleLabel);
tableRows.put(row2.getLabel(), row2);
model.insertRow(row2, model.getRowCount() - 1);
}**
} // close lock
/*
* Synch is needed because multiple threads can update the counts.
*/
synchronized(row) {
row.addSample(res);
}
**synchronized(row1) {
row1.addSample(theSubResults[0]);
}**
**synchronized(row2) {
row2.addSample(theSubResults[1]);
}**
That is all that needs to be customized.
In Eclipse, export your new class into a Jar file. Then place it inside of the lib/ext folder of your binary of Jmeter that you extracted, from Step 1 above.
Start up Jmeter, as you normally would.
In your Java sampler, add a new Listener. You will now see two "Summary Table" listeners. One of these will be the new one that you have just created. Once you have brought that new one into your Java Sampler, rename it to something unique. Then run your test and look at your new "Summary Table" listener. You will see summary results/stats for all of your sample results.
My next step is to perform these same steps for all of the other Listeners that I would like to customize.
I hope that this post helps.
Here is some of my plugin code which you can use as a starting point in writing your own plugin. I can't really post everything as there are dozens of classes. A few things to know are:
my plugin, like all visualizer plugins, extends the JMeter class
AbstractVisualizer
you need the following jars in Eclipse to compile:
jfxrt.jar, ApacheJMeter_core.jar
you need Java 1.8 for JavaFX (the jar file comes with the JDK)
if you compile a plugin you need to put it in jmeter/lib/ext.
You also need to put the jars from bullet 2 in jmeter/lib
there is a method called "add(SampleResult)" in my class. This
will get called by the JMeter framework every time a Java sample
completes and will pass the SampleResult as a parameter. Assuming you
have your own Java sampler classes that extend
AbstractJavaSamplerClient, your class will have a method called
runTest which returns a SampleResult. That same return object will be
passed into your plugin's add method.
my plugin puts all the sample results into a buffer and only
updates the screen every 5 results.
Here is the code:
import java.awt.BorderLayout;
import java.util.ArrayList;
import java.util.List;
import javafx.application.Platform;
import javafx.embed.swing.JFXPanel;
import javax.swing.border.Border;
import javax.swing.border.EmptyBorder;
import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.testelement.TestStateListener;
import org.apache.jmeter.visualizers.gui.AbstractVisualizer;
public class FxVisualizer extends AbstractVisualizer implements TestStateListener {
int currentId = 0;
/**
*
*/
private static final long serialVersionUID = 1L;
private static final int BUFFER_SIZE = 5;
@Override
public String getName()
{
return super.getName();//"George's sub result viewer.";
}
@Override
public String getStaticLabel()
{
return "Georges FX Visualizer";
}
@Override
public String getComment()
{
return "George wrote this plugin. There are many plugins like it but this one is mine.";
}
static Long initCount = new Long(0);
public FxVisualizer()
{
init();
}
private void init()
{
//LoggingUtil.debug("in FxVisualizer init()");
try
{
FxTestListener.setListener(this);
this.setLayout(new BorderLayout());
Border margin = new EmptyBorder(10, 10, 5, 10);
this.setBorder(margin);
//this.add(makeTitlePanel(), BorderLayout.NORTH);
final JFXPanel fxPanel = new JFXPanel();
add(fxPanel);
//fxPanel.setScene(getScene());
Platform.runLater(new Runnable() {
@Override
public void run() {
initFX(fxPanel);
}
});
}
catch(Exception e)
{
e.printStackTrace();
}
}
static FxVisualizerScene fxScene;
private static void initFX(JFXPanel fxPanel) {
// This method is invoked on the JavaFX thread
fxScene = new FxVisualizerScene();
fxPanel.setScene(fxScene.getScene());
}
final List <Event> bufferedEvents = new ArrayList<Event>();
@Override
public void add(SampleResult result)
{
final List<Event> events = ...; // here you need to take the result.getSubResults() parameter and get all the child events.
final List<Event> eventsToAdd = new ArrayList<Event>();
synchronized(bufferedEvents)
{
for (Event evt : events)
{
bufferedEvents.add(evt);
}
if (bufferedEvents.size() >= BUFFER_SIZE)
{
eventsToAdd.addAll(bufferedEvents);
bufferedEvents.clear();
}
}
if (eventsToAdd.size() > 0)
{
Platform.runLater(new Runnable() {
@Override
public void run() {
updatePanel(eventsToAdd);
}
});
}
}
public void updatePanel(List <Event> events )
{
for (Event evt: events)
{
fxScene.addEvent(evt);
}
}
@Override
public void clearData()
{
synchronized(bufferedEvents)
{
Platform.runLater(new Runnable() {
@Override
public void run() {
bufferedEvents.clear();
fxScene.clearData();
}
});
}
}
@Override
public String getLabelResource() {
return "Georges Java Sub FX Sample Listener";
}
Boolean isRunning = false;
@Override
public void testEnded()
{
final List<Event> eventsToAdd = new ArrayList<Event>();
synchronized(bufferedEvents)
{
eventsToAdd.addAll(bufferedEvents);
bufferedEvents.clear();
}
if (eventsToAdd.size() > 0)
{
Platform.runLater(new Runnable() {
@Override
public void run() {
updatePanel(eventsToAdd);
fxScene.testStopped();
}
});
}
}
Long testCount = new Long(0);
@Override
public void testStarted() {
synchronized(bufferedEvents)
{
Platform.runLater(new Runnable() {
@Override
public void run() {
updatePanel(bufferedEvents);
bufferedEvents.clear();
fxScene.testStarted();
}
});
}
}
@Override
public void testEnded(String arg0)
{
//LoggingUtil.debug("testEnded 2:" + arg0);
testEnded();
}
int registeredCount = 0;
@Override
public void testStarted(String arg0) {
//LoggingUtil.debug("testStarted 2:" + arg0);
testStarted();
}
}
OK, so I just decided to write my own JMeter plugin, and it is dead simple. I'll share the code for posterity when it is complete. Just write a class that extends AbstractVisualizer, compile it into a jar, then throw it into the JMeter lib/ext directory. That plugin will show up in the listeners section of JMeter when you go to add visualizers.