What are reasons for EclipseLink failing to autodetect entity classes? (OSGi)

I'm running EclipseLink in an OSGi container and my entity classes are in their own bundle.
I have repeatedly run into the problem that EclipseLink fails to autodetect the entity classes, resulting in "Xyz is not a known entity type" messages. This is painful to debug, and my somewhat helpless approach is to more or less randomly tweak configuration files until it works.
I wish I knew a more systematic approach, but I don't seem to know enough about possible reasons for the problem. What could they be? Is there an overview of what happens in autodetection and what is required for it to work?
So if you ran into the problem yourself and were able to determine one specific reason, post it here, or vote it up if you already see it listed. That way we can produce a list of typical issues sorted by frequency. I'll add the ones I actually solved.
Facts I know:
EclipseLink uses the OSGi extender pattern: it listens for bundles being installed and then sets them up.
It supposedly uses the class loader of the bundle that defines the persistence unit; if you're using a persistence.xml for configuration, this is the bundle where that file should be located.
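For illustration, a bundle layout consistent with these two facts might look like this (all names hypothetical):

com.example.entities.jar
    META-INF/MANIFEST.MF        (the bundle that "owns" the persistence unit)
    META-INF/persistence.xml    (must be reachable by that bundle's class loader)
    com/example/model/Customer.class
    com/example/model/Order.class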

EclipseLink JPA is not able to persist objects of classes that merely extend entity classes. Such subclasses must be entity classes themselves.

The bundle with the entity classes doesn't have the correct JPA-PersistenceUnits header in its manifest. This header is how EclipseLink finds out that there is a persistence unit to be processed.
If listing your classes explicitly makes it work, a wrong or missing header was not your problem.
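For reference, a sketch of what the header might look like (the unit name is hypothetical and should match the name attribute in persistence.xml):

JPA-PersistenceUnits: my-unit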

The entity class is not listed explicitly in the configuration of the persistence unit, and the persistence.xml (or whatever config mechanism you use) doesn't set the exclude-unlisted-classes parameter to false. (Depending on whether you run Java SE or EE, it may be true by default.)
If it helps to list your classes explicitly, this may be your problem.
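A minimal persistence.xml sketch combining both fixes (unit and class names are hypothetical):

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="my-unit" transaction-type="RESOURCE_LOCAL">
        <class>com.example.model.Customer</class>
        <exclude-unlisted-classes>false</exclude-unlisted-classes>
    </persistence-unit>
</persistence>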

Related

Is there a good way to document Spring @Value fields?

Over the course of writing Spring Boot apps, our team adds a lot of @Value annotations to help make things configurable. At some point we start to lose track of exactly what we added and what can be configured. We get a lot of questions from the QA and DevOps teams about what exactly can be configured and what can't.
Currently we just do a grep through the code base and apply some crude regular expressions to try and parse out the meaningful pieces. But this doesn't catch 100% of cases and inevitably we end up digging through the code to find out what fields can be configured.
I know we could use JavaDoc to somewhat achieve our goal, but the documentation would be buried with other JavaDoc (methods, fields, classes, etc) and it's still reliant on developers to remember to add the JavaDoc to each field.
Has anyone found a more automated way to document their @Value fields? I'm thinking something like Swagger, but specifically for Spring and the various ways it can externalize configuration.
Javadoc is indeed a way to document things for developers, but not for QA or operations.
Your question is really interesting, but answering it canonically is hard because @Value annotations are implementation details of components. Swagger, which you mention, documents REST contracts; that is an important difference.
Here are some ideas:
Functionally, writing a BDD test for properties that doubles as documentation may seem pointless, but technically it makes sense.
Indeed, you could write a BDD integration test (with Cucumber or any other library) in which you document and assert the presence of each expected property.
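Stripped of the Cucumber layer, the core idea might look like this minimal sketch (the property name is hypothetical, standing in for one consumed by a @Value field):

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.core.env.Environment;

@SpringBootTest
class ConfigurablePropertiesTest {

    @Autowired
    private Environment environment;

    @Test
    void clientTimeoutIsConfigured() {
        // app.client.timeout is assumed to back a @Value("${app.client.timeout}") field
        assertNotNull(environment.getProperty("app.client.timeout"));
    }
}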
Not a perfect solution, but you could at least retrieve the exposed properties, and a little more, with these Spring Boot actuator endpoints:
configprops: Displays a collated list of all @ConfigurationProperties.
env: Exposes properties from Spring's ConfigurableEnvironment.
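On Spring Boot 2.x these endpoints have to be exposed explicitly; a sketch for application.properties (the property names differ on 1.x):

management.endpoints.web.exposure.include=env,configprops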
Whenever you can, favor @ConfigurationProperties injection over @Value to group properties that work together. Isolating them in @ConfigurationProperties classes and adding Javadoc to them is a perfectly decent way to document their presence and usage.
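A minimal sketch of such a grouped class (prefix and fields are hypothetical); the field Javadoc is exactly what the metadata processor mentioned below picks up:

import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "app.payment")
public class PaymentProperties {

    /** Base URL of the payment service. */
    private String baseUrl;

    /** Request timeout in milliseconds. */
    private int timeoutMillis = 5000;

    public String getBaseUrl() { return baseUrl; }
    public void setBaseUrl(String baseUrl) { this.baseUrl = baseUrl; }
    public int getTimeoutMillis() { return timeoutMillis; }
    public void setTimeoutMillis(int timeoutMillis) { this.timeoutMillis = timeoutMillis; }
}

The class still needs to be registered, e.g. via @EnableConfigurationProperties(PaymentProperties.class) on a configuration class.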
As suggested by caco3, you can also generate your own metadata by using the annotation processor:
You can easily generate your own configuration metadata file from items annotated with @ConfigurationProperties...
The processor picks up both classes and methods that are annotated with @ConfigurationProperties. The Javadoc for field values within configuration classes is used to populate the description attribute.
This ties in with the previous point: favor @ConfigurationProperties whenever possible.
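If you build with Maven, the processor is enabled by adding a dependency like this; at compile time it generates META-INF/spring-configuration-metadata.json:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>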

Should repositories in Spring Boot applications be tested directly?

Not sure if this will be considered a "legitimate question" or "purely opinion based", but is there a "best practice" with regards to directly testing a repository in a Spring Boot application? Or, should any integration testing simply target the associated service?
The reasoning for the question is simply the fact that, for the most part, a repository in a Spring Boot application contains no project-generated code. At best, it contains project-defined method signatures which Spring generates implementations for (assuming correct naming conventions).
Thanks...
If you can mess it up, you should test it. Here, the opportunities to mess up include:
Custom queries (using @Query) might be wrong (there can be all kinds of logic mistakes or typos when writing a query, with no compile-time checking).
Repository methods where the query is derived from the method name might not be what you intended.
The arguments passed in: the type of a parameter might not match the type needed in the query (nothing enforces this at compile time).
In all these cases you're not testing Spring Data JPA, you're testing the functionality you are implementing using Spring Data JPA.
Cases of using provided methods out of the box, like findOne, findAll, save, etc., where your fingerprints are not on them, don't need testing.
It's easy to test this stuff, and better to find the bugs earlier than later.
Yes, I think it is good practice to do that. You could use the @DataJpaTest annotation, which starts an in-memory database. The official documentation says:
You can use the @DataJpaTest annotation to test JPA applications. By default, it configures an in-memory embedded database, scans for @Entity classes, and configures Spring Data JPA repositories. Regular @Component beans are not loaded into the ApplicationContext.
Link to the docs: https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-testing.html
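A minimal sketch of such a test (the Customer entity and the derived findByLastName query are hypothetical):

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

@DataJpaTest
class CustomerRepositoryTest {

    @Autowired
    private CustomerRepository repository;

    @Test
    void findByLastNameReturnsOnlyMatchingCustomers() {
        repository.save(new Customer("Jane", "Doe"));
        repository.save(new Customer("John", "Smith"));

        List<Customer> result = repository.findByLastName("Doe");

        assertEquals(1, result.size());
        assertEquals("Jane", result.get(0).getFirstName());
    }
}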
Starting from the idea that repositories should be used only inside services and services are used to interact with the other layers of the system, I would say that testing services should be enough in the majority of cases.
I would not test standard repository methods like findAll or findBy...; they have been tested already, and the purpose is not to test JPA but rather the application.
The only repository methods that should have direct tests are the ones with custom queries. These queries may be located in a shared library, and it is not efficient to write similar tests across different projects (in this case, regression is a big concern).

Classpath scanning in OSGi

My project has a set of custom-defined annotations that could be present in any bundle deployed in the OSGi 4.3 framework. I want to find any class with these annotations on the classpath. I tried using BundleWiring.listResources(...) and Bundle.loadClass(...) for each class found. I have done some tests with a small set of bundles, and it needs almost 200 MB of permanent-generation JVM memory because all classes are loaded.
Is there a way to free the PermGen space of loaded classes when the program realizes that they do not have these annotations?
Is there a better way to look for annotated classes in an OSGi framework?
I think you should not do annotation scanning, as it slows down startup and needs a lot of memory. JEE application servers do annotation scanning at startup to make lazy programmers happy, and the result is very annoying (e.g. scanning for JPA or EJB annotations).
I guess you are implementing a technology where you can define the rules. I suggest that you should define rules that are similar to these:
Annotate your class
Have a MANIFEST header where the annotated class must be listed.
An even better solution can be to use a custom capability namespace with specified attributes. E.g.:
Provide-Capability: myNamespace;classes="com.foo.myClass1,com.foo.myClass2"
In your technology, you should write a BundleTracker that calls:
BundleWiring.getCapabilities("myNamespace");
If the namespace is present, you can find the classes that should be processed.
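A minimal sketch of such a tracker, assuming the hypothetical myNamespace capability and classes attribute from above:

import java.util.List;

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleEvent;
import org.osgi.framework.wiring.BundleCapability;
import org.osgi.framework.wiring.BundleWiring;
import org.osgi.util.tracker.BundleTracker;

public class AnnotatedClassTracker extends BundleTracker<Bundle> {

    public AnnotatedClassTracker(BundleContext context) {
        super(context, Bundle.ACTIVE, null);
    }

    @Override
    public Bundle addingBundle(Bundle bundle, BundleEvent event) {
        BundleWiring wiring = bundle.adapt(BundleWiring.class);
        if (wiring == null) {
            return bundle; // e.g. fragments have no wiring of their own
        }
        List<BundleCapability> caps = wiring.getCapabilities("myNamespace");
        for (BundleCapability cap : caps) {
            // the "classes" attribute holds the declared class names;
            // load only those instead of scanning the whole bundle
            Object classes = cap.getAttributes().get("classes");
            System.out.println(bundle.getSymbolicName() + " declares: " + classes);
        }
        return bundle;
    }
}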
If you implement the technology, you can consider an extension to Bnd to fill that MANIFEST header automatically. That extension can then be used when bnd is started from the command line or from build tools like Maven.
Btw.: you can use ASM to parse the class bytecode, or use Java's built-in facilities to build up an AST. Although those could work around the memory issue, I still think that you should define the list of classes directly in the MANIFEST header, as it makes things much clearer. You can read the MANIFEST headers and check the capabilities in the webconsole, but you cannot do the same with bytecode.
Usually, classpath scanning for annotations is a bad idea in an OSGi context, as the classpath is more like a graph. However, there are situations where this can be useful. Hence, OSGi encourages the usage of the Whiteboard Pattern.
What you could possibly do is register each of these classes as services in the OSGi registry. Then, create a separate bundle that just tracks these services and transforms/manipulates them in some way. For example, this project scans for all classes annotated with @Path and @Provider annotations and transforms them into Jersey REST APIs.
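A rough sketch of that idea (MyAnnotatedComponent and the handler.kind marker property are hypothetical); both fragments are assumed to run inside a BundleActivator.start(BundleContext context):

// In the providing bundle: publish the annotated class as a service
Hashtable<String, Object> props = new Hashtable<>();
props.put("handler.kind", "my-annotation");
context.registerService(Object.class, new MyAnnotatedComponent(), props);

// In the single consuming bundle: track all such services
ServiceTracker<Object, Object> tracker = new ServiceTracker<>(
        context, context.createFilter("(handler.kind=my-annotation)"), null);
tracker.open();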

Spring annotations when a Java file is compiled

I started learning Spring today, and I have a question regarding what happens to the annotations when Java files with annotations are compiled.
The reason I am asking is the fundamental difference I see between choosing the XML approach and the annotations approach, given what I take to be the philosophy of Spring. The way I understand it, Spring says that all your Java classes can be simple POJOs and all the Spring-related config should be kept independent (like an XML file).
When developing a Spring application using XML, the *.java files have no idea about the Spring container and are compiled into .class files without any Spring-related dependencies.
But when we annotate a .java file and the file is compiled, the compiled file now has all the Spring-related dependencies hard-baked into it, and your classes are no longer simple POJOs.
Is this correct? I am not sure if I am missing something here.
Annotations can be considered metadata of a class or of one of its elements (method, field, local variable...). When you put an annotation on an element, you don't implement any behaviour; you just give additional info about it.
That way Spring, which is in charge of instantiating its beans, can collect the info with reflection (see also this site) and process it.
To conclude, your Spring beans still remain POJOs, and there is no difference from the XML way (...from that point of view), since Spring gets from the annotations the information it would have gotten from the XML.
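To make the "metadata only" point concrete, here is a small self-contained sketch (the Marker annotation and MyPojo are made up for illustration): the annotation adds no behaviour at all, and code that cares about it has to read it via reflection, just as Spring does.

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

public class AnnotationMetadataDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Marker {
        String value() default "";
    }

    static class MyPojo {
        @Marker("filled in by the container")
        private String name;
    }

    public static void main(String[] args) {
        // The annotation is just recorded in the .class file;
        // MyPojo behaves like any other POJO until someone reads the metadata.
        for (Field field : MyPojo.class.getDeclaredFields()) {
            Marker marker = field.getAnnotation(Marker.class);
            if (marker != null) {
                System.out.println(field.getName() + " -> " + marker.value());
            }
        }
    }
}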
I think you are right, and your question is justified; that's the way I think about it too.
Not only the compiled code but also the dependency on the Spring JARs bothers me. Once you use these annotations, your resulting JAR depends on the Spring library.
It's reasonable to keep beans in the model according to DDD, but Spring is a kind of infrastructure layer, so I didn't like the dependency.
Even if you use XML, it's useful in a few places to use annotations, e.g. the @Required annotation, which verifies that a linked bean was injected. So I decided to use constructor dependency injection to omit this annotation (see my article); that way I completely leave the dependency on Spring out of the code.
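A minimal sketch of that approach with hypothetical types; note that nothing in the class imports Spring:

public class ReportService {

    private final ReportDao reportDao;

    // The constructor enforces the dependency: the bean simply cannot be
    // created without it, which is what @Required used to verify.
    public ReportService(ReportDao reportDao) {
        if (reportDao == null) {
            throw new IllegalArgumentException("reportDao must not be null");
        }
        this.reportDao = reportDao;
    }
}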
You can probably find such a workaround for many of the annotations you want (or are forced) to use.
You can use annotations only on your configuration classes, without marking the actual bean classes. In such a scenario, if you don't use Spring, you just don't load the configuration classes.

Is it bad practice for a spring-based jar project to provide a bean configuration file?

If you have a library containing Spring beans that need to be wired together before an application can use them, does it make sense to include any sort of bean configuration file in the JAR (such as in the /META-INF directory)? The idea is to give the application the option of importing this into its master Spring context configuration.
There may be more than one way to wire these beans, so I could provide a bean configuration file for each of the standard ways in which you'd typically wire them together.
Or, do I force the application to wire these up explicitly?
If it helps, the specifics of my problem involve a library I created to encapsulate our product's persistence layer. It contains Service, DAO and model beans. The DAO implementations currently use Hibernate (this probably won't change). Some of the DAO implementations need different kinds of Strategy beans injected into them (database encryption logic), depending on the type of database we are deploying on (MySQL vs SQL Server, etc). So we have potentially a few different configuration scenarios. I could also provide datasource bean configurations, relying on property substitution at the app level to inject all the particulars needed by the datasource.
Thanks for your input!
In this case, it's a good idea to provide some bean files, either as examples for documentation purposes or as fully-fledged files ready to be imported into a wider context.
If your beans' wiring can get complex, then you shouldn't really leave it entirely up to the library client to figure it out.
This is more of a documentation and education task, really.
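For example, if the library ships a ready-made file (the path and file name here are hypothetical), the application can pull it into its master context with a single import:

<import resource="classpath:META-INF/spring/persistence-beans.xml"/>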
