Spring3, JAXB2, Java6, NamespacePrefixMapper questions

I built a simple Spring3, Hibernate3/(JPA2), RESTful service, hosted on Tomcat6, that uses JAXB2 to marshal the results. (It uses annotated pojos.) I needed to use specific namespace prefixes, so I wrote a custom com.sun.xml.bind.marshaller.NamespacePrefixMapper. I included the JAXB2 RI jars with my application and everything worked fine.
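(For reference, a custom mapper of this kind looks roughly like the sketch below; the prefix and namespace values are only placeholders.)

import com.sun.xml.bind.marshaller.NamespacePrefixMapper;

public class MyPrefixMapper extends NamespacePrefixMapper {

    @Override
    public String getPreferredPrefix(String namespaceUri, String suggestion, boolean requirePrefix) {
        if ("http://www.example.com/orders".equals(namespaceUri)) {
            return "ord";          // force a fixed prefix for this namespace
        }
        return suggestion;         // otherwise fall back to whatever JAXB proposes
    }
}

// Registered through the RI's vendor-specific marshaller property:
// marshaller.setProperty("com.sun.xml.bind.namespacePrefixMapper", new MyPrefixMapper());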
Then someone said that's great, we need to host it under WebLogic 11g (10.3.3) too. No problem, I created the special weblogic deployment descriptors to prefer the application jars, renamed my persistence.xml, and wrapped the WAR in an EAR with the JPA2 jars. It worked great, almost.
Unfortunately, our WebLogic server runs a custom security realm that also uses JAXB and causes conflicts with my application. So I dropped the JAXB jars from the app and it runs fine in WebLogic. Of course it no longer runs under Tomcat unless I add the JAXB jars to Tomcat. I'd like to avoid that.
So, my questions. I've read quite a few posts on Stack Overflow containing plenty of opinions and disagreements about using the Sun "internal" JAXB2 implementation versus packaging the RI with your app.
Is there not yet a clean solution to this problem?
Does my stack support another way to custom-map my namespace prefixes without including the JAXB2 RI?
Can I safely use the Java6 "internal" JAXB NamespacePrefixMapper, or will that come and go with various Java releases?
Does Spring3 offer another solution?
What's the true story on the Java6 JAXB2 implementation? Is it only there for Sun's (Oracle's) internal use?
Thanks.

As mentioned in the comments, I'll summarise what is mentioned in http://www.func.nl/community/knowledgebase/customize-namespace-prefix-when-marshalling-jaxb.
Note: I haven't tried this myself, so it may not work.
Essentially, you configure the JAXB marshaller to use an XMLStreamWriter when marshalling, and you configure that to map prefixes, e.g.
StringWriter writer = new StringWriter();   // any java.io.Writer target will do
XMLStreamWriter xmlStreamWriter = XMLOutputFactory.newInstance().createXMLStreamWriter(writer);
xmlStreamWriter.setPrefix("func", "http://www.func.nl");

JAXBContext context = JAXBContext.newInstance(object.getClass());
Marshaller marshaller = context.createMarshaller();
marshaller.marshal(object, xmlStreamWriter);
xmlStreamWriter.flush();
The idea is that if JAXB hasn't been given a prefix mapper, then it'll leave it up to the XMLStreamWriter to handle the prefixes, and by doing the above, you're telling it how to do it.
Again: I'm just repeating the content from the website that's blocked from your network, so I take no credit for it being right, and no blame for it being wrong.

EclipseLink JAXB (MOXy) will use the namespace prefixes as declared in the @XmlSchema annotation.
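A minimal sketch of that package-level mapping (the package name and namespace are illustrative):

// package-info.java
@XmlSchema(
        namespace = "http://www.example.com/customer",
        elementFormDefault = XmlNsForm.QUALIFIED,
        xmlns = {
                @XmlNs(prefix = "cust", namespaceURI = "http://www.example.com/customer")
        })
package com.example.customer;

import javax.xml.bind.annotation.XmlNs;
import javax.xml.bind.annotation.XmlNsForm;
import javax.xml.bind.annotation.XmlSchema;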
For more information see:
How to customize namespace prefixes on Jersey(JAX-WS)
Define Spring JAXB namespaces without using NamespacePrefixMapper

Related

SpringBoot creating a framework starter library

I am creating a library using Spring Boot (v2.1.6.RELEASE) as a starter project. It will act as a base extension jar responsible for configuring and starting up some components based on the client project's properties file.
The issue I am facing is that if the client project's Spring Boot application class is in the same package path as the library, everything works like a charm; but when the client project uses a different package path and includes @ComponentScan, it is not able to load or start the components from the library.
Has anyone encountered this issue? How can the client application be made to auto-configure some of the components from the library jar?
Note: I am following the library creation example from here: https://www.baeldung.com/spring-boot-custom-starter
There are many things that can go wrong here; without seeing the relevant parts of the actual code it's hard to say anything concrete. Off the top of my head, here are a couple of points for consideration that will hopefully lead to the solution:
Since we use starters in our applications (and sometimes people use explicit component scanning in their Spring applications) and this obviously works, the issue is probably with the starter module itself. The mere fact that component scanning is used does not by itself prevent the starter from being loaded ;)
Make sure the starter is a regular library and not packaged as a Spring Boot application (read: you don't use the Spring Boot plugin) and has <packaging>jar</packaging> in your pom.xml, or the equivalent in whatever you use to build.
Make sure you have a src/main/resources/META-INF/spring.factories file (case sensitive and everything).
Make sure that this spring.factories file indeed contains a valid reference to your configuration (a Java class annotated with @Configuration); see the sketch after this list. If you use component scanning in the same package, it will find and load this configuration even without spring.factories; in that case it's just another portion of your code that happens to be packaged as a separate jar. So this point looks especially "suspicious" to me.
Make sure that the @Configuration doesn't have a @Conditional-something on it; maybe the condition is not satisfied and the configuration doesn't start. For debugging purposes you might even remove these @Conditional annotations just to ensure that the configuration starts. You can also add some logging inside the @Configuration class, like "loading my cool library".
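A minimal sketch of those last two points (the package and class names are made up; the spring.factories key is the standard one for Spring Boot 2.x):

# src/main/resources/META-INF/spring.factories
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  com.example.mylibrary.MyLibraryAutoConfiguration

// com/example/mylibrary/MyLibraryAutoConfiguration.java
package com.example.mylibrary;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyLibraryAutoConfiguration {

    @Bean
    public MyLibraryService myLibraryService() {
        return new MyLibraryService();   // a component the starter is supposed to provide
    }
}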

Spring AOP with AspectJ - Load time weaving doubts

Reading the Spring AOP documentation (link), I'm having a hard time (maybe also because English is not my native language) understanding these paragraphs.
First, I read
Further, in certain environments, this support enables load-time weaving without making any modifications to the application server’s launch script that is needed to add -javaagent:path/to/aspectjweaver.jar or (as we describe later in this section) -javaagent:path/to/org.springframework.instrument-{version}.jar (previously named spring-agent.jar).
And
Developers modify one or more files that form the application context to enable load-time weaving
Which files? @Aspect classes and aop.xml files?
Then, when describing an example in the same sub-chapter, they say
We have one last thing to do. The introduction to this section did say that one could switch on LTW selectively on a per-ClassLoader basis with Spring, and this is true. However, for this example, we use a Java agent (supplied with Spring) to switch on the LTW. We use the following command to run the Main class shown earlier:
And they apply a Java Agent to the JVM.
-javaagent:C:/projects/foo/lib/global/spring-instrument.jar
Now I have a couple of doubts.
If I use @EnableLoadTimeWeaving, do I need the spring-instrument jar file as a Java agent?
I suppose the answer is yes, because we need to add bytecode to the class before loading it. But a confirmation would be much appreciated.
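(For reference, the Java-config switch being discussed is just the following; the class name is a placeholder.)

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableLoadTimeWeaving;

@Configuration
@EnableLoadTimeWeaving
public class AppConfig {
    // enables Spring's LTW support; AspectJ still reads META-INF/aop.xml for the aspects to weave
}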
The jar naming is a little ambiguous: first they mention spring-agent.jar, then they use org.springframework.instrument-{version}.jar, and then spring-instrument.jar.
Are we always talking about the same jar file?
I see from another question you asked that you are using Spring Boot and running a fat jar. In this case you don't need @EnableLoadTimeWeaving or spring-instrument (formerly known as spring-agent). Just ignore them unless you are running in an application server where you don't control the agent path.
I opened an issue for you about the confusion in the docs: https://github.com/spring-projects/spring-framework/issues/22429.

@Stateless, @Stateful and @...Scoped in Karaf while migrating a CDI app to OSGi

I was recently fascinated by a migration scheme presented by Talend (http://coders.talend.com/), (I guess) via the talk recording here and the presentation from the talk by Andrei Shakirin.
I've seen a lot of publications by his colleague Christian Schneider as well.
And, of course, part of the subject matter is Christian's blueprint-maven-plugin.
The presentation linked above describes how it is possible to continuously deploy the same code base both to a common Java EE container like Tomcat and to an OSGi container like Karaf, which is exactly what I am interested in (not Tomcat, but WildFly or GlassFish, for instance).
So, I wonder:
How does blueprint-maven-plugin handle the @Stateful and @Stateless annotations, and @ApplicationScoped, @SessionScoped, @RequestScoped, etc. on beans while producing the blueprint file?
Also, say I have new code to write which would use CDI, and I want it to also be deployable to Karaf. How should I write that piece? Should I avoid the @Stateful and @Stateless annotations?
How would those annotations (if for any reason they are irrelevant when I deploy to OSGi) be interpreted by that OSGi container (Karaf), since they DO appear in the code?
After a quick scan of the blueprint-maven-plugin source code, it seems the EJB annotations @Stateless, @Stateful and the *Scoped annotations aren't handled by the plugin.
@Stateless and @Stateful aren't really part of CDI but of an extension to it (EJB). If you really like what EJB does, consider using the OpenEJB feature in Karaf. You should be able to use @Inject without these annotations as long as the class is a valid bean.
Annotations are just markers in source code (in other words, they don't do anything by themselves); they have no effect unless a processor is registered to handle them.
E.g. you can instantiate an EJB with the new keyword and run unit tests on it (fulfilling all injections with setters, of course), as sketched below.
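A small sketch of that point (the class and collaborator names are invented):

import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class GreetingService {

    @Inject
    private GreetingRepository repository;   // injected by the container at runtime

    // setter only used by tests, where nothing processes @Inject
    void setRepository(GreetingRepository repository) {
        this.repository = repository;
    }

    public String greet(String name) {
        return repository.template() + ", " + name;
    }
}

// In a plain unit test the annotations are inert:
// GreetingService service = new GreetingService();
// service.setRepository(new InMemoryGreetingRepository());
// service.greet("world");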
Disclaimer:
I don't really use this plugin myself, and a newer version of the plugin might support these EJB annotations.

Spring Annotations when java file is compiled

I started learning Spring today and I have a question about what happens to the annotations when Java files with annotations are compiled.
The reason I am asking is the fundamental difference I see between choosing the XML approach and the annotations approach, and what I think is the philosophy of Spring. The way I understand it, Spring says that all your Java classes can be simple POJOs and all the Spring-related config should be kept separate (like an XML file).
When developing a Spring application using XML, the *.java files have no idea about the Spring container and are compiled into .class files without any Spring-related dependencies.
But when we annotate the .java file and the file is compiled, the compiled file has all the Spring-related dependencies baked into it, and your classes are no longer simple POJOs.
Is this correct? I am not sure if I am missing something here.
Annotations can be considered metadata of a class or of one of its elements (method, field, local variable...). When you put an annotation on something, you don't implement any behaviour; you just give additional info about that element.
That way Spring, which is in charge of instantiating its beans, can collect the info with reflection (see also this site) and process it.
To conclude, your Spring beans still remain POJOs, and there is no difference from the XML way (...from that point of view), since Spring gets from the annotations the information it would have got from the XML.
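A tiny sketch of that idea (not literally how Spring scans, but the same reflection mechanism):

import org.springframework.stereotype.Component;

@Component
public class PlainPojo {
    // the class body references no Spring types; @Component is pure metadata
}

// Somewhere else, the metadata can be read back at runtime:
// boolean marked = PlainPojo.class.isAnnotationPresent(Component.class);   // true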
I think you are right, and your question is justified; that's the way I think about it too.
Not only the compiled code but also the dependency on the Spring jars bothers me. Once you use these annotations, your resulting jar depends on the Spring library.
It's reasonable to keep beans in the model according to DDD, but Spring is a kind of infrastructure layer, so I didn't like the dependency.
Even if you use XML, it's useful in a few places to use annotations, e.g. the @Required annotation, which verifies that a linked bean was injected. So I decided to use constructor dependency injection to omit this annotation (see my article, and the sketch below). That way I completely leave the dependency on Spring out of the code.
You can probably find such a workaround for many of the annotations you want (or are forced) to use.
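A sketch of that constructor-injection style (the class names are made up):

public class OrderService {

    private final OrderRepository repository;

    // The compiler forces callers (XML config, another framework, or a test)
    // to supply the dependency, so no @Required/@Autowired annotation is needed.
    public OrderService(OrderRepository repository) {
        this.repository = repository;
    }
}

// Wired from XML, e.g.:
// <bean id="orderService" class="com.example.OrderService">
//     <constructor-arg ref="orderRepository"/>
// </bean>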
You can use annotations only on your configuration classes, without marking the actual bean classes. In that scenario, if you don't use Spring, you simply don't load the configuration classes.
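A sketch of that approach (names are made up); only the configuration class depends on Spring:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ApplicationConfig {

    @Bean
    public OrderRepository orderRepository() {
        return new JdbcOrderRepository();          // plain, annotation-free class
    }

    @Bean
    public OrderService orderService(OrderRepository orderRepository) {
        return new OrderService(orderRepository);  // plain, annotation-free class
    }
}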

Is it possible to reuse Axis2 beans for a CXF service implementation?

I'm adding a CXF interface to my Axis2 web application, and was wondering if it is possible to use all the generated Axis2 beans instead of having two sets.
Right now, I've maneuvered the project enough to the point that it compiles (using Maven), but when starting it up in Tomcat I get a lot of these errors:
@XmlAttribute/@XmlValue need to reference a Java type that maps to text in XML.
This error comes from the Axis2 beans, and I'm guessing it's due to the fact that Axis2 adds @XmlAttribute annotations in its generated sources. My question is: is there an easier way to use Axis2 beans with CXF? Is it even possible? Would commenting out the @XmlAttribute lines before compilation fix my problem?
Adding a maven-antrun-plugin task to find and replace @Xml with //@Xml seemed to work. I haven't tested the independent services, but they are both running without errors. If anyone has a better solution, I'm all ears; mine seems a little hacky.
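For what it's worth, a sketch of such an antrun task (the generated-sources directory, phase, and plugin version are assumptions about the project layout, not details from the question):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <executions>
        <execution>
            <id>comment-out-jaxb-annotations</id>
            <phase>process-sources</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <target>
                    <!-- comment out @XmlAttribute/@XmlValue etc. in the generated Axis2 beans -->
                    <replace dir="${project.build.directory}/generated-sources"
                             includes="**/*.java"
                             token="@Xml" value="//@Xml"/>
                </target>
            </configuration>
        </execution>
    </executions>
</plugin>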
