Recompile with new class definition for mutation testing - chronicle

I am trying to use openHFT/java-runtime-compiler to improve my mutation testing tool, moving it from heavy disk access to in-memory compilation only.
In mutation testing there are two kinds of classes:
A. The mutated class: a class whose definition is constantly manipulated/altered and recompiled.
B. Other classes: classes whose definitions never change, e.g. test case classes, or other classes needed by the mutated class.
With openHFT/java-runtime-compiler this is easily done with the code below, which creates a new ClassLoader for each recompilation of both the mutated class and the other classes.
String BSourceCode = loadFromFiles(); // class definition loaded
for (someIterationCondition) {
    // some operation that generates/manipulates/mutates ASourceCode
    String ASourceCode = mutation();
    ClassLoader classLoader = new ClassLoader() { };
    Class AClass = CompilerUtils.CACHED_COMPILER.loadFromJava(classLoader, "A", ASourceCode);
    Class BClass = CompilerUtils.CACHED_COMPILER.loadFromJava(classLoader, "B", BSourceCode);
}
That works well: each time a new definition of class A is compiled, AClass reflects the new definition.
But it does not work if the sequence is reversed, as in the code below (BClass loaded first, then AClass), which is sometimes necessary, e.g. when class A uses class B. The recompilation of class A does not pick up the new definition; it always keeps the first definition that class A was compiled with.
String BSourceCode = loadFromFiles(); // class definition loaded
for (someIterationCondition) {
    // some operation that generates/manipulates/mutates ASourceCode
    String ASourceCode = mutation();
    ClassLoader classLoader = new ClassLoader() { };
    Class BClass = CompilerUtils.CACHED_COMPILER.loadFromJava(classLoader, "B", BSourceCode);
    Class AClass = CompilerUtils.CACHED_COMPILER.loadFromJava(classLoader, "A", ASourceCode);
}
I suspect I need to modify the loadFromJava method of the openHFT/java-runtime-compiler library (below). I already tried commenting out the lines
//if (clazz != null)
//    return clazz;
which I expected would force recompilation of all the source code (even the already-compiled classes) every time loadFromJava is called, but it gives wrong results.
Please help me point out the change needed to make this work.
public Class loadFromJava(@NotNull ClassLoader classLoader,
                          @NotNull String className,
                          @NotNull String javaCode,
                          @Nullable PrintWriter writer) throws ClassNotFoundException {
    Class clazz = null;
    Map<String, Class> loadedClasses;
    synchronized (loadedClassesMap) {
        loadedClasses = loadedClassesMap.get(classLoader);
        if (loadedClasses == null) {
            loadedClassesMap.put(classLoader, loadedClasses = new LinkedHashMap<String, Class>());
        } else {
            clazz = loadedClasses.get(className);
        }
    }
    PrintWriter printWriter = (writer == null ? DEFAULT_WRITER : writer);
    if (clazz != null)
        return clazz;
    for (Map.Entry<String, byte[]> entry : compileFromJava(className, javaCode, printWriter).entrySet()) {
        String className2 = entry.getKey();
        synchronized (loadedClassesMap) {
            if (loadedClasses.containsKey(className2))
                continue;
        }
        byte[] bytes = entry.getValue();
        if (classDir != null) {
            String filename = className2.replaceAll("\\.", '\\' + File.separator) + ".class";
            boolean changed = writeBytes(new File(classDir, filename), bytes);
            if (changed) {
                LOG.info("Updated {} in {}", className2, classDir);
            }
        }
        Class clazz2 = CompilerUtils.defineClass(classLoader, className2, bytes);
        synchronized (loadedClassesMap) {
            loadedClasses.put(className2, clazz2);
        }
    }
    synchronized (loadedClassesMap) {
        loadedClasses.put(className, clazz = classLoader.loadClass(className));
    }
    return clazz;
}
Thank you very much for all your help.
Edited
Thanks Peter Lawrey. I have tried your suggestion, but it gives the same result: class A sticks to the first definition used (in the first iteration) and fails to pick up a new definition (in the next iterations).
I have gathered symptoms, and the likely explanation is that the first iteration (the first time a class is compiled/loaded) is treated differently from the following iterations. From there I tried a couple of things.
1st Symptom
It showed up when I put output lines (System.out.println) in loadFromJava (below):
Class clazz = null;
Map<String, Class> loadedClasses;
synchronized (loadedClassesMap) {
    loadedClasses = loadedClassesMap.get(classLoader);
    if (loadedClasses == null) {
        loadedClassesMap.put(classLoader, loadedClasses = new LinkedHashMap<String, Class>());
        System.out.println("loadedClasses Null " + className);
    } else {
        clazz = loadedClasses.get(className);
        if (clazz == null)
            System.out.println("clazz Null " + className);
        else
            System.out.println("clazz not Null " + className);
    }
}
The output was:
1st Iteration (new ClassLoader and new CachedCompiler)
(when loading B): loadedClasses Null
(when loading A): clazz Null
Next Iteration (new ClassLoader and new CachedCompiler)
(when loading B): loadedClasses Null
(when loading A): clazz not Null
In the first iteration the output is correct: "loadedClasses Null" when loading B, because loadedClassesMap does not contain the classLoader yet, and "clazz Null" when loading A, because loadedClassesMap now contains the classLoader but not the class name "A".
In the next iteration, however, loading A prints "clazz not Null": the class name "A" is apparently already stored in loadedClassesMap.get(classLoader), which is not supposed to happen. I tried clearing loadedClassesMap in the CachedCompiler constructor,
loadedClassesMap.clear();
but it gives LinkageError: loader (instance of main/Utama$2): attempted duplicate class definition.
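That LinkageError is consistent with a JVM rule rather than a library quirk: a given class loader may define a class of a given name only once, so once the cache entry is gone, defineClass is called for "A" a second time on a loader that has already defined it. A minimal, self-contained sketch of that rule (the OpenLoader helper is purely illustrative, not part of the library; requires Java 9+ for readAllBytes):
import java.io.InputStream;

public class DuplicateDefineDemo {
    // Tiny loader that exposes the protected defineClass method.
    static class OpenLoader extends ClassLoader {
        Class<?> define(String name, byte[] bytes) {
            return defineClass(name, bytes, 0, bytes.length);
        }
    }

    public static void main(String[] args) throws Exception {
        // Read the bytes of an already-compiled class from the classpath.
        String name = DuplicateDefineDemo.class.getName();
        try (InputStream in = DuplicateDefineDemo.class
                .getResourceAsStream("/" + name.replace('.', '/') + ".class")) {
            byte[] bytes = in.readAllBytes();
            OpenLoader loader = new OpenLoader();
            loader.define(name, bytes); // first definition: fine
            loader.define(name, bytes); // second definition: LinkageError (duplicate class definition)
        }
    }
}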
2nd Symptom
A stronger symptom of the first-iteration difference appeared when I checked the s_fileManager buffers:
1st Iteration (new ClassLoader and new CachedCompiler)
(when loading B): CompilerUtils.s_fileManager.getAllBuffers().size() = 1
(when loading A): CompilerUtils.s_fileManager.getAllBuffers().size() = 2
Next Iteration (new ClassLoader and new CachedCompiler)
(when loading B): CompilerUtils.s_fileManager.getAllBuffers().size() = 2
(when loading A): CompilerUtils.s_fileManager.getAllBuffers().size() = 2
The 1st iteration is as expected, but in the next iteration the s_fileManager buffer already has size 2 at the start; it is never reset to 0.
I tried clearing the file manager buffers in the CachedCompiler constructor (below),
CompilerUtils.s_fileManager.clearBuffers();
but it gives ExceptionInInitializerError.
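If the underlying issue is that the static CompilerUtils.s_fileManager keeps its compiled buffers across CachedCompiler instances, then the next compileFromJava call can hand back the stale bytecode for A no matter how fresh the ClassLoader is. One workaround that sidesteps the shared buffers entirely, assuming nothing else needs to refer to the mutant by the fixed name "A", is to give each mutant a unique class name so a stale buffer can never match it. A hedged sketch, reusing the pseudocode helpers from the question (the textual rename is deliberately crude, for illustration only):
String BSourceCode = loadFromFiles(); // class definition loaded
int generation = 0;
for (someIterationCondition) {
    generation++;
    String mutantName = "A_" + generation; // unique name per iteration
    // crude rename so buffers cached for earlier mutants never collide
    String ASourceCode = mutation().replace("class A", "class " + mutantName);
    ClassLoader classLoader = new ClassLoader() { };
    Class BClass = CompilerUtils.CACHED_COMPILER.loadFromJava(classLoader, "B", BSourceCode);
    Class AClass = CompilerUtils.CACHED_COMPILER.loadFromJava(classLoader, mutantName, ASourceCode);
}
This only helps in the A-uses-B direction described above; if B referred to A by name, the rename would have to be applied to B's source as well.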

If you want to use a fresh set of classes, I suggest not using the same cache of classes.
String BSourceCode = loadFromFiles(); // class definition loaded
for (someIterationCondition) {
    // some operation that generates/manipulates/mutates ASourceCode
    String ASourceCode = mutation();
    ClassLoader classLoader = new ClassLoader() { };
    CachedCompiler compiler = new CachedCompiler(null, null);
    Class AClass = compiler.loadFromJava(classLoader, "A", ASourceCode);
    Class BClass = compiler.loadFromJava(classLoader, "B", BSourceCode);
}
This will use a new cache each time and not be affected by classes loaded in a previous test.

Related

How will neo4j jdbc (bolt) handle queries that return a list of nodes?

In neo4j jdbc (bolt), a Node is returned as a Map, but if you make a query that returns a list of nodes, getObject() will return a list of InternalNodes. Entities in this list cannot be identified by instanceof, so reflection is used to identify a node by its type name and to read its values by invoking methods reflectively. I can get the values by doing the following, but is this approach correct? (rs is the ResultSet; entity is the return value of this method.)
Object columnObject = rs.getObject(columnName);
if (columnObject instanceof List<?>) {
    List<Map<String, Object>> objectValue = new ArrayList<>();
    Array columnArray = rs.getArray(columnName);
    Object[] columnArrayValues = (Object[]) columnArray.getArray();
    for (int iTmp = 0; iTmp < columnArrayValues.length; iTmp++) {
        Map<String, Object> colArrayItemMap = new HashMap<>();
        Object colItemObj = columnArrayValues[iTmp];
        Class colItemClass = colItemObj.getClass();
        if (colItemClass.getName().equals("org.neo4j.driver.internal.InternalNode")) {
            Method asMap = colItemClass.getMethod("asMap");
            Method getId = colItemClass.getMethod("id");
            Method getLabels = colItemClass.getMethod("labels");
            colArrayItemMap.put("_id", getId.invoke(colItemObj));
            colArrayItemMap.put("_labels", getLabels.invoke(colItemObj));
            colArrayItemMap.putAll((Map<? extends String, ?>) asMap.invoke(colItemObj));
        } else {
            colArrayItemMap.put("_raw", columnArrayValues[iTmp]);
        }
        objectValue.add(colArrayItemMap);
    }
    ((Map) entity).put(propertyName, objectValue);
} else {
    ((Map) entity).put(propertyName, columnObject);
}
Such lists are produced by Cypher statements like this one:
MATCH
(input:Input),
(output:Output)
WITH input, output
MATCH
(input)-[:INPUT*1]->(in),
(out)-[:OUTPUT*1]->(output),
g = (in)-[connect:CONNECT*0..5]->(out)
RETURN
input, output, extract(x IN nodes(g)|x) as nodes
It turned out to be a class loader issue, which is why the nodes could not be identified with the instanceof operator: the JDBC driver was placed in Tomcat/lib, so its classes were judged different from the classes loaded by the application.
In any case, until getArray() returns a list of public types (or getResults() is supported), it seems necessary to write this kind of conversion.
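The class loader point is easy to reproduce in isolation. Below is a minimal sketch (plain Java, nothing neo4j-specific; the IsolatingLoader is purely illustrative) showing that the "same" class loaded by two different loaders is two distinct runtime types, so instanceof fails across them:
import java.io.InputStream;

public class TwoLoadersDemo {
    public static class Marker { }

    // A loader with no parent delegation for application classes,
    // so it defines its own copy of Marker from the classpath bytes.
    static class IsolatingLoader extends ClassLoader {
        IsolatingLoader() { super(null); }
        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            String resource = "/" + name.replace('.', '/') + ".class";
            try (InputStream in = TwoLoadersDemo.class.getResourceAsStream(resource)) {
                byte[] bytes = in.readAllBytes();
                return defineClass(name, bytes, 0, bytes.length);
            } catch (Exception e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> otherMarker = new IsolatingLoader()
                .loadClass(TwoLoadersDemo.class.getName() + "$Marker");
        Object o = otherMarker.getDeclaredConstructor().newInstance();
        System.out.println(o instanceof Marker); // false: defined by a different loader
        System.out.println(o.getClass().getName()
                .equals(Marker.class.getName())); // true: same binary name
    }
}
This is exactly the Tomcat/lib situation: same class name, different loader, so instanceof is false and matching on the type name via reflection is one way out.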

Hibernate queries getting slower and slower

I'm working on a process that checks and updates data in an Oracle database. I'm using Hibernate and Spring Framework in my application.
The application reads a CSV file, processes the content, then persists the entities:
public class Main {
    public static void main(String[] args) {
        Input input = ReadCSV(path);
        EntityList resultList = Process.process(input);
        WriteResult.write(resultList);
        ...
    }
}

// Process class that loops over the input
public class Process {
    public EntityList process(Input input) {
        EntityList results = ...;
        ...
        for (Line line : input.readLine()) {
            results.add(ProcessLine.process(line));
            ...
        }
        return results;
    }
}

// retrieving and updating entities
public class ProcessLine {
    @Autowired
    DomaineRepository domaineRepository;
    @Autowired
    CompanyDomaineService companydomaineService;

    @Transactional
    public MyEntity process(Line line) {
        // getCompanyByXX is a CrudRepository method with @Query that returns an entity object
        MyEntity companyToAttach = domaineRepository.getCompanyByCode(line.getCode());
        MyEntity companyToDetach = domaineRepository.getCompanyBySiret(line.getSiret());
        if (companyToDetach == null || companyToAttach == null) {
            throw new CustomException("Custom Exception");
        }
        // attachCompany retrieves some relationEntity, then removes companyToDetach
        // and adds companyToAttach; this updates the relationEntity.company attribute
        companydomaineService.attachCompany(companyToAttach, companyToDetach);
        return companyToAttach;
    }
}

public class WriteResult {
    @Autowired
    DomaineRepository domaineRepository;

    @Transactional
    public void write(EntityList results) {
        for (MyEntity result : results) {
            domaineRepository.save(result);
        }
    }
}
The application works well on files with few lines, but when I try to process large files (200,000 lines) the performance degrades drastically and I get a SQL timeout.
I suspect cache issues, but I'm wondering whether saving all the entities at the end of the processing is bad practice?
The problem is your for loop, which saves each result individually and therefore issues single inserts, slowing everything down. Hibernate and Spring support batch inserts, and they should be used whenever possible:
something like domaineRepository.saveAll(results)
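Also check that JDBC batching is actually switched on: Hibernate does not batch statements unless hibernate.jdbc.batch_size is set. A hedged sketch of the usual flush-and-clear pattern (the properties shown are standard Spring Boot/Hibernate settings; entityManager is assumed to be injected with @PersistenceContext):
// application.properties (standard settings, shown here as an assumption):
//   spring.jpa.properties.hibernate.jdbc.batch_size=50
//   spring.jpa.properties.hibernate.order_inserts=true

@PersistenceContext
EntityManager entityManager;

@Transactional
public void write(EntityList results) {
    int batchSize = 50;
    int count = 0;
    for (MyEntity result : results) {
        domaineRepository.save(result);
        if (++count % batchSize == 0) {
            entityManager.flush(); // push the current batch of inserts to the database
            entityManager.clear(); // drop the first-level cache so it stops growing
        }
    }
}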
Since you are processing a lot of data, it may also be better to read in batches: instead of fetching one company to attach at a time, fetch a list of companies to attach and process those, then fetch a list of companies to detach and process those:
public EntityList process(Input input) {
    EntityList results;
    List<Code> companiesToAdd = new ArrayList<>();
    List<Siret> companiesToRemove = new ArrayList<>();
    for (Line line : input.readLine()) {
        companiesToAdd.add(line.getCode());
        companiesToRemove.add(line.getSiret());
        ...
    }
    results = process(companiesToAdd, companiesToRemove);
    return results;
}

public List<MyEntity> process(List<Code> companiesToAdd, List<Siret> companiesToRemove) {
    List<MyEntity> attachList = domaineRepository.getCompanyByCodeIn(companiesToAdd);
    List<MyEntity> detachList = domaineRepository.getCompanyBySiretIn(companiesToRemove);
    if (attachList.isEmpty() || detachList.isEmpty()) {
        throw new CustomException("Custom Exception");
    }
    companydomaineService.attachCompany(attachList, detachList);
    return attachList;
}
The above is just pseudocode to point you in the right direction; you will need to work out what works for you.
For every line you read, you are doing two read operations here:
MyEntity companyToAttach = domaineRepository.getCompanyByCode(line.getCode());
MyEntity companyToDetach = domaineRepository.getCompanyBySiret(line.getSiret());
You can instead read many lines, use an "in" query, and then process the resulting list of companies.
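With Spring Data's derived queries, the "in" variants can be declared directly on the repository interface. A sketch, reusing the Code and Siret types from the pseudocode above and assuming MyEntity has code and siret properties (the Long id type is also an assumption):
public interface DomaineRepository extends CrudRepository<MyEntity, Long> {
    // one round trip each, returning every matching row at once
    List<MyEntity> getCompanyByCodeIn(Collection<Code> codes);
    List<MyEntity> getCompanyBySiretIn(Collection<Siret> sirets);
}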

Why can BDDMockito not resolve types in this case?

Consider DynamoDB's QueryApi. Through a series of (unfortunate?) hoops,
ItemCollection<QueryOutcome>
ends up being equivalent to
Iterable<Item>
I know this because I can do:
public PuppyDog getPuppy(final String personGuid, final String name) {
    final QuerySpec spec = new QuerySpec()
        .withKeyConditionExpression("#d = :guid and #n = :name")
        .withNameMap(new NameMap().with("#d", "guid").with("#n", "name"))
        .withValueMap(new ValueMap().withString(":guid", personGuid).withString(":name", name));
    return getDog(index.query(spec));
}

private PuppyDog getDog(final Iterable<Item> itemCollection) {
    // http://stackoverflow.com/questions/23932061/convert-iterable-to-stream-using-java-8-jdk
    return StreamSupport.stream(itemCollection.spliterator(), false)
        .map(this::createDogFor)
        // it would be a little weird to find more than 1, but not sure what to do if so
        .findAny().orElse(new PuppyDog());
}
But when I try to write tests in Mockito using BDDMockito:
@Test
public void canGetPuppyDogByPersonGuidAndName() {
    final PuppyDog dawg = getPuppyDog();
    final ArgumentCaptor<QuerySpec> captor = ArgumentCaptor.forClass(QuerySpec.class);
    final ItemCollection<QueryOutcome> items = mock(ItemCollection.class);
    given(query.query(captor.capture())).willReturn(items);
}
The compiler complains when I try to treat items as an Iterable<Item>.
Why dis?
Dis not because of BDDMockito. Dis because ItemCollection<QueryOutcome> simply can't be cast safely into Iterable<Item>. It can be cast into Iterable<QueryOutcome> or even Iterable<? extends Item>, but not Iterable<Item>.
Otherwise you could do this:
final ItemCollection<QueryOutcome> items = mock(ItemCollection.class);
Collection<Item> yourItemCollection = items;        // if this were allowed...
yourItemCollection.add(itemThatIsNotAQueryOutcome); // ...it would violate the safety of items
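To see the invariance rule in isolation, here is a minimal, self-contained sketch with no Mockito or DynamoDB involved:
import java.util.ArrayList;
import java.util.List;

public class InvarianceDemo {
    static class Animal { }
    static class Dog extends Animal { }
    static class Cat extends Animal { }

    public static void main(String[] args) {
        List<Dog> dogs = new ArrayList<>();
        dogs.add(new Dog());
        // List<Animal> animals = dogs;        // does not compile: generics are invariant
        List<? extends Animal> animals = dogs; // fine: a read-only view
        Animal first = animals.get(0);         // reading through the wildcard is safe
        // animals.add(new Cat());             // does not compile: writing is blocked
        System.out.println(first);
    }
}
Reading through the wildcard view is safe; writing is rejected at compile time, which is exactly the hole the compiler is closing in the mock example above.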
See also:
Is List<Dog> a subclass of List<Animal>? Why aren't Java's generics implicitly polymorphic?
Why are arrays covariant but generics are invariant?

How to exclude null value when using FsCheck Property attribute?

I need to write a simple method that receives a parameter (e.g. a string) and does something with it. Usually I end up with two tests. The first one covers the guard clause. The second validates the expected behavior (for simplicity, the method just shouldn't fail):
[Fact]
public void DoSmth_WithNull_Throws()
{
    var sut = new Sut();
    Assert.Throws<ArgumentNullException>(() =>
        sut.DoSmth(null));
}

[Fact]
public void DoSmth_WithValidString_DoesNotThrow()
{
    var s = "123";
    var sut = new Sut();
    sut.DoSmth(s); // does not throw
}

public class Sut
{
    public void DoSmth(string s)
    {
        if (s == null)
            throw new ArgumentNullException();
        // do something important here
    }
}
When I try to utilize the FsCheck [Property] attribute to generate random data, null and numerous other random values are passed to the test, which at some point causes an NRE:
[Property]
public void DoSmth_WithValidString_DoesNotThrow(string s)
{
    var sut = new Sut();
    sut.DoSmth(s); // throws ArgumentNullException after 'x' tests
}
I realize that generating numerous random inputs to cover different cases is the entire idea of FsCheck, which is definitely great.
Is there any elegant way to configure the [Property] attribute to exclude undesired values? (In this particular test, that's null.)
FsCheck has some built-in types that can be used to signal specific behaviour, for example that reference-type values shouldn't be null. One of these is NonNull<'a>. If you ask for one of those instead of a raw string, you'll get no nulls.
In F#, you'd be able to destructure it as a function argument:
[<Property>]
let DoSmth_WithValidString_DoesNotThrow (NonNull s) =
    // s is already a string here...
    let sut = Sut ()
    sut.DoSmth s // use your favourite assertion library here...
I think that in C#, it ought to look something like this, but I haven't tried:
[Property]
public void DoSmth_WithValidString_DoesNotThrow(NonNull<string> s)
{
    var sut = new Sut();
    sut.DoSmth(s.Get); // s.Get is never null, so this no longer throws
}

Can set be instantiated?

I was reading about Collections when this question struck me.
Following is the code I wrote to test my doubt:
public static void main(String[] args) {
    TreeMap<Integer, String> tree = new TreeMap<Integer, String>();
    tree.put(1, "1");
    tree.put(2, "2");
    Set<Integer> set = tree.keySet();
    System.out.println(set instanceof Set);
    System.out.println(set instanceof HashSet);
}
Result:
true
false
The code above says that my set object is an instance of Set. But Set is an interface; how can it be instantiated? I'm confused. :(
Set is an interface, so no, you cannot directly instantiate it. Interfaces would be pretty useless if you couldn't have an instance of one, though! The instance returned by tree.keySet() is some concrete implementation of the Set interface.
Let's get super-specific, and look at the TreeMap#keySet() source code:
public Set<K> keySet() {
    return navigableKeySet();
}
Okay, that doesn't tell us much. We need to drill down:
public NavigableSet<K> navigableKeySet() {
    KeySet<K> nks = navigableKeySet;
    return (nks != null) ? nks : (navigableKeySet = new KeySet(this));
}
So the concrete type returned is a KeySet! There's your implementation of the Set interface. http://www.docjar.com/html/api/java/util/TreeMap.java.html#1021
Which explains this:
System.out.println(set instanceof Set); // prints true
System.out.println(set instanceof HashSet); // prints false
Set is an interface; HashSet is an implementation of that interface. foo instanceof Set will be true for every instance foo of any Set implementation. We already established that the concrete type of the object returned by TreeMap#keySet() is a KeySet, not a HashSet, so that explains why set instanceof HashSet is false: because set is a KeySet, it cannot be a HashSet!
If that still doesn't make sense to you, read up on instanceof:
The instanceof operator compares an object to a specified type. You can use it to test if an object is an instance of a class, an instance of a subclass, or an instance of a class that implements a particular interface.
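A minimal sketch makes the same point with an implementation class you can name directly:
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class InstanceofDemo {
    public static void main(String[] args) {
        Set<Integer> set = new TreeSet<>();
        System.out.println(set instanceof Set);     // true: TreeSet implements Set
        System.out.println(set instanceof TreeSet); // true: the concrete class
        System.out.println(set instanceof HashSet); // false: a sibling implementation
    }
}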
