Weld 2.4.2.Final Maven test

We are facing the same problem as the one identified here: Weld using alternative producer from src/test/META-INF/beans.xml.
With Maven and Weld SE, during tests, Weld does not use the beans.xml located in test/resources/META-INF.
We looked on Google, and in the Weld JIRA issues, but it seems that other people do not have the same problem as us.
So does anyone know how to use Weld in JUnit tests with Maven so that Weld SE is configured by the beans.xml located in test/resources/META-INF?

Weld itself cannot help you with this; Weld SE will in fact pick up the first beans.xml on the classpath. To be more precise, main/resources/META-INF/beans.xml will be used for the bean archive main/java/, while test/resources/META-INF/beans.xml will be used for the bean archive test/java/. Therefore, mixing these up would be unwise at best, and so Weld does not support it by default.
Anyhow, there are several options to solve your problem.
Use Arquillian + Shrinkwrap
For CDI testing, this is the best way you can hope for, and a very good one once you learn it. ShrinkWrap allows you to tailor deployments exactly to your needs, including only the classes you want and whichever beans.xml you want. There is an Arquillian container for Weld SE, which is even used in the Weld SE test suite, so you can take inspiration from it.
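A hedged sketch of such a test (MyService and TestAlternativeProducer are hypothetical class names, and this assumes the Weld SE embedded Arquillian container is on the test classpath):

import static org.junit.Assert.assertNotNull;

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.StringAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class AlternativeProducerTest {

    @Deployment
    public static JavaArchive createDeployment() {
        return ShrinkWrap.create(JavaArchive.class)
                // only the classes this test needs
                .addClasses(MyService.class, TestAlternativeProducer.class)
                // a beans.xml written just for this test, enabling the alternative
                .addAsManifestResource(new StringAsset(
                        "<beans><alternatives>"
                        + "<class>com.example.TestAlternativeProducer</class>"
                        + "</alternatives></beans>"), "beans.xml");
    }

    @Inject
    MyService service;

    @Test
    public void alternativeIsUsed() {
        assertNotNull(service);
    }
}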
Disable discovery while bootstrapping Weld SE
Somewhere in your unit tests, you are starting the Weld container. While doing so, you can use Weld#disableDiscovery(), which means you will create a synthetic bean archive. In such an archive, default discovery is disabled and only the things you explicitly add (via methods such as addBeanClasses, addPackages, ...) will land in the resulting archive. And since you have no discovery, no beans.xml will be picked up! Instead, you define all the alternatives, interceptors, ... in code (again, there are methods for this on the Weld class). Now, I imagine this could "easily" be placed into some @Before method, should you need to do this repeatedly; a sketch follows. Alternatively, you could also use Weld-junit, which provides a JUnit @Rule and allows you to easily describe the deployment on a per-class basis.
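A minimal sketch of the @Before approach (MyService and TestAlternativeProducer are hypothetical; the Weld class is org.jboss.weld.environment.se.Weld):

import static org.junit.Assert.assertNotNull;

import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class SyntheticArchiveTest {

    private Weld weld;
    private WeldContainer container;

    @Before
    public void startWeld() {
        weld = new Weld()
                .disableDiscovery()                      // no beans.xml is consulted at all
                .addBeanClasses(MyService.class, TestAlternativeProducer.class)
                .alternatives(TestAlternativeProducer.class); // enabled in code instead of XML
        container = weld.initialize();
    }

    @After
    public void stopWeld() {
        weld.shutdown();
    }

    @Test
    public void beanResolves() {
        assertNotNull(container.select(MyService.class).get());
    }
}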
Both of the above approaches should help you with those hardships. But as I said, there is no way to achieve this with just plain Weld SE.

Related

@Stateless and @Stateful @..scoped in Karaf while migrating a CDI app to OSGi

I was recently fascinated by a migration scheme presented by Talend (http://coders.talend.com/, I guess)
via a talk recording here
and the presentation from the talk made by Andrei Shakirin.
I've seen a lot of publications by his colleague Christian Schneider as well.
And, of course, part of the subject matter is Christian's blueprint-maven-plugin.
The presentation linked above describes how it is possible to continuously deploy the same code base both to a common Java EE container like Tomcat and to an OSGi container like Karaf, which is exactly what I am interested in (not Tomcat, but WildFly or GlassFish, for instance).
So, I wonder:
How does blueprint-maven-plugin handle the @Stateful and @Stateless annotations, and @ApplicationScoped, @SessionScoped, @RequestScoped, etc. on beans, while producing the blueprint file?
Also, say I have new code to write which would use CDI, and I want it to also be deployable to Karaf. How should I then write that piece? Should I avoid the @Stateful and @Stateless annotations?
How would those annotations (if they are for any reason irrelevant when deploying to OSGi) be interpreted by the OSGi container (Karaf), since those annotations DO appear in the code?
After a quick scan of the blueprint-maven-plugin source code, it seems that the EJB annotations @Stateless and @Stateful and the *Scoped annotations aren't handled by the plugin.
@Stateless and @Stateful aren't really part of CDI but an extension to it (EJB). If you really love what EJB does, consider using the OpenEJB feature in Karaf. You should be able to use @Inject without these annotations, as long as the target is a valid bean.
Annotations are just markers in source code (in other words, they don't do anything by themselves); nothing happens unless a processor is registered to handle them.
E.g. you can instantiate an EJB with the new keyword and perform a unit test on it (fulfilling all injections with setters, of course).
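For illustration, a hedged sketch (OrderService, its setter, and InMemoryOrderRepository are hypothetical):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderServiceTest {

    @Test
    public void computesTotal() {
        // @Stateless on OrderService is simply ignored outside a container
        OrderService service = new OrderService();
        // fulfill the injection point by hand, via a setter
        service.setOrderRepository(new InMemoryOrderRepository());
        assertEquals(42, service.totalFor("order-1"));
    }
}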
Disclaimer:
I don't really use this plugin myself, and there might be a newer version of the plugin that supports these EJB annotations.

Migrating from a Spring monolith application to OSGi

We have been building two suites of applications for the last 10 years, using Spring for our dependency injection. We also use spring-batch and spring-amqp. We are now looking to move to OSGi so that our monolithic applications can be separated into bundles and we can be more agile. The two suites are web applications and are deployed as two separate war files. We are looking to use Apache Karaf as our OSGi runtime.
Spring-DM is dead and it appears that we are going to have to convert EVERYTHING to use Blueprint for our dependency injection.
My question is: how do we do this incrementally? It will be close to impossible to convert all of this at once. It seems like one bundle should still be able to use Spring DI and have its own application context, as long as we take responsibility for exposing any services we want to the service registry in the bundle activator, but I'm not sure if there is some kind of magic that we would lose, like transaction management.
Any guidance on this would be really appreciated.
You might want to consider making the problem appear even larger and switching to Declarative Services (DS) instead of Blueprint ... To take true advantage of the OSGi model, DS is far superior to Blueprint in all respects. In reality, after the first hurdle, you'll make much more progress and your gains will be higher. Though Blueprint made Spring available on OSGi, it never 'got' OSGi.
For strategy, keep your Spring app alive as a single bundle and move things out gradually. I.e. the elephant approach.
The biggest gains that OSGi provides can be summarized as follows:
Make sure modules have service APIs that ONLY handle collaboration. I.e. each service API should be a story/scenario of how the actors work together, not of how they come into existence and are configured.
Let Configuration Admin do the configuration work. I.e. never expose configuration APIs. In OSGi, you register instances, not things that still need to be configured.
Make sure you really understand the OSGi model with services. You might want to take a look at OSGi enRoute, which leverages OSGi to the hilt.
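To make the first two points concrete, here is a hedged DS sketch (Greeter and GreeterImpl are hypothetical names, in separate source files in a real project; assumes a DS 1.3+ runtime such as the Karaf scr feature):

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.log.LogService;

// Greeter.java: the service API only describes collaboration
interface Greeter {
    String greet(String name);
}

// GreeterImpl.java: registered as a ready-to-use instance; no configuration API is exposed
@Component
public class GreeterImpl implements Greeter {

    @Reference                 // collaborator injected by the SCR runtime
    private LogService log;

    private String greeting = "Hello";

    @Activate
    void activate(Map<String, Object> config) {
        // Configuration Admin pushes configuration in; callers never configure the instance
        Object g = config.get("greeting");
        if (g != null) {
            greeting = g.toString();
        }
    }

    @Override
    public String greet(String name) {
        log.log(LogService.LOG_INFO, "greeting " + name);
        return greeting + ", " + name;
    }
}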
I propose you take a look at the blueprint-maven-plugin. It allows you to use a subset of CDI and JEE annotations to define injections as well as transactions and persistence. The plugin creates the Blueprint XML at build time, which can then be executed by Karaf. The big advantage is that these annotations are also supported by Spring, so you can transition and, in parallel, release to production using Spring.
I have a complete example here: Annotation based blueprint and JPA.
Using this plugin, I migrated a medium-sized project while it was being developed and released in parallel. If you need further advice while using the plugin, I can surely help.
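For a feel of what the plugin consumes, a hedged sketch of an annotated service class (TaskServiceImpl, TaskNotifier, Task, and the "tasklist" persistence unit are hypothetical; check the plugin documentation for the exact annotation set your version supports). The plugin scans such classes at build time and emits the corresponding Blueprint XML:

import javax.inject.Inject;
import javax.inject.Singleton;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;

@Singleton          // becomes a <bean> in the generated Blueprint XML
@Transactional      // translated into Blueprint transaction configuration
public class TaskServiceImpl {

    @PersistenceContext(unitName = "tasklist")  // wired to the JPA entity manager
    EntityManager em;

    @Inject                                     // plain CDI injection of another bean
    TaskNotifier notifier;

    public void addTask(Task task) {
        em.persist(task);
        notifier.notifyCreated(task);
    }
}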

Spring annotations when a Java file is compiled

I started learning Spring today, and I have a question about what happens to annotations when Java files containing them are compiled.
The reason I am asking is the fundamental difference I see between choosing the XML approach and the annotations approach, and what I think is the philosophy of Spring. The way I understand it, Spring says that all your Java classes can be simple POJOs, and all the Spring-related config should be kept separate (like an XML file).
When developing a Spring application using XML, the *.java files have no idea about the Spring container and are compiled into .class files without any Spring-related dependencies.
But when we annotate a .java file and it is compiled, the compiled file now has all the Spring-related dependencies baked into it, and your classes are no longer simple POJOs.
Is this correct? I am not sure if I am missing something here.
Annotations can be considered metadata of a class or of one of its elements (method, field, local variable...). When you put an annotation on something, you don't implement any behaviour; you just give additional info about the element.
That way Spring, which is in charge of instantiating its beans, can collect the info via reflection (see also this site) and process it.
To conclude, your Spring beans still remain POJOs, and there is no difference from the XML way (...from that point of view), since Spring gets from annotations the information it would otherwise have got from XML.
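As a self-contained illustration of that mechanism (MyComponent and OrderBean are toy names, not Spring API):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationDemo {

    // a toy marker annotation, analogous to a Spring stereotype
    @Retention(RetentionPolicy.RUNTIME)
    @interface MyComponent {
        String value() default "";
    }

    @MyComponent("orderBean")
    static class OrderBean { }

    public static void main(String[] args) {
        // OrderBean gains no behaviour from the annotation; a container
        // reads the metadata reflectively and decides what to do with it
        MyComponent meta = OrderBean.class.getAnnotation(MyComponent.class);
        System.out.println("bean name: " + meta.value()); // prints: bean name: orderBean
    }
}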
I think you are right, and your question is justified; that's the way I think about it too.
Not only the compiled code but also the dependency on the Spring jars bothers me: once you use these annotations, your resulting jar depends on the Spring library.
It's reasonable to keep beans in the model according to DDD, but Spring is a kind of infrastructure layer, so I didn't like the dependency.
Even if you use XML, it's useful in a few places to use annotations. E.g. the @Required annotation, which is useful to verify that a linked bean was injected. So I've decided to use constructor dependency injection to omit this annotation (see my article); that completely leaves the dependency on Spring out of the code.
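A hedged sketch of that idea (InvoiceService and TaxCalculator are hypothetical):

public class InvoiceService {

    private final TaxCalculator taxCalculator;

    // the constructor enforces the dependency, so no @Required setter is needed
    public InvoiceService(TaxCalculator taxCalculator) {
        if (taxCalculator == null) {
            throw new IllegalArgumentException("taxCalculator must not be null");
        }
        this.taxCalculator = taxCalculator;
    }
}

In the XML config this maps to a <constructor-arg> element instead of a property setter, and a missing dependency fails fast at instantiation time.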
You can probably find such a workaround for many of the annotations you want, or are forced, to use.
You can also use annotations only for your configuration classes, without marking the actual bean classes. In such a scenario, if you don't use Spring, you just don't load the configuration classes.
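A hedged sketch, reusing the hypothetical classes from above, where only the configuration class touches Spring:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// only this class knows about Spring; the wired beans stay plain POJOs
@Configuration
public class AppConfig {

    @Bean
    public TaxCalculator taxCalculator() {
        return new TaxCalculator();
    }

    @Bean
    public InvoiceService invoiceService() {
        return new InvoiceService(taxCalculator());
    }
}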

Spring3, JAXB2, Java6, NamespacePrefixMapper questions

I built a simple Spring3, Hibernate3/(JPA2), RESTful service, hosted on Tomcat6, that uses JAXB2 to marshal the results. (It uses annotated POJOs.) I needed to use specific namespace prefixes, so I wrote a custom com.sun.xml.bind.marshaller.NamespacePrefixMapper. I included the JAXB2 RI jars with my application and everything worked fine.
Then someone said that's great, we need to host it under WebLogic 11g (10.3.3) too. No problem, I created the special weblogic deployment descriptors to prefer the application jars, renamed my persistence.xml, and wrapped the WAR in an EAR with the JPA2 jars. It worked great, almost.
Unfortunately, our WebLogic server runs a custom security realm that also uses JAXB and causes conflicts with my application. So I dropped the JAXB jars from the app and it runs fine in WebLogic. Of course it no longer runs under Tomcat unless I add the JAXB jars to Tomcat. I'd like to avoid that.
So my questions... I've read quite a few posts on Stack Overflow that contain a lot of opinions/disagreements regarding using the Sun "internal" JAXB2 implementation vs. packaging the RI with your app. Is there not yet a clean solution to this problem? Does my stack support another way to custom-map my namespace prefixes without including the JAXB2 RI? Can I safely use the Java6 "internal" JAXB NamespacePrefixMapper, or will that come and go with various Java releases? Does Spring3 offer another solution? What's the true story on the Java6 JAXB2 implementation? Is it only there for Sun's (Oracle's) internal use?
Thanks.
As mentioned in the comments, I'll summarise what is mentioned in http://www.func.nl/community/knowledgebase/customize-namespace-prefix-when-marshalling-jaxb.
Note: I haven't tried this myself, so it may not work.
Essentially, you configure the JAXB marshaller to use an XMLStreamWriter when marshalling, and you configure that writer to map the prefixes, e.g.:
// 'object' is the JAXB-annotated instance to marshal
StringWriter writer = new StringWriter();
XMLStreamWriter xmlStreamWriter = XMLOutputFactory.newInstance().createXMLStreamWriter(writer);
xmlStreamWriter.setPrefix("func", "http://www.func.nl");
JAXBContext context = JAXBContext.newInstance(object.getClass());
Marshaller marshaller = context.createMarshaller();
marshaller.marshal(object, xmlStreamWriter);
xmlStreamWriter.flush(); // the XML ends up in 'writer'
The idea is that if JAXB hasn't been given a prefix mapper, it'll leave it to the XMLStreamWriter to handle the prefixes, and by doing the above you're telling it how to do so.
Again: I'm just repeating the content from the website that's blocked from your network, so I take no credit for it being right, and no blame for it being wrong.
EclipseLink JAXB (MOXy) will use the namespace prefixes declared in the @XmlSchema annotation.
For more information see:
How to customize namespace prefixes on Jersey(JAX-WS)
Define Spring JAXB namespaces without using NamespacePrefixMapper
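For reference, a hedged sketch of such an @XmlSchema declaration in a package-info.java (com.example.model is a placeholder package); MOXy honors the declared prefixes when marshalling:

@XmlSchema(
        namespace = "http://www.func.nl",
        elementFormDefault = XmlNsForm.QUALIFIED,
        xmlns = { @XmlNs(prefix = "func", namespaceURI = "http://www.func.nl") })
package com.example.model;

import javax.xml.bind.annotation.XmlNs;
import javax.xml.bind.annotation.XmlNsForm;
import javax.xml.bind.annotation.XmlSchema;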

Seam Equivalent of Spring PersistenceUnitPostProcessor

We have a very comfortable setup using JPA through Spring/Hibernate, where we attach a PersistenceUnitPostProcessor to our entity manager factory. This post-processor takes a list of project names, scans the classpath for jars containing those names, and adds those jar files to the persistence unit for entity scanning. This is much more convenient than specifying them in persistence.xml, since it can take partial names, and we added facilities for detecting the different classpath configurations when we are running in a war, a unit test, an ear, etc.
Now we are trying to replace Spring with Seam, and I can't find a facility to accomplish the same hooking mechanism. One solution is to hook Seam through Spring, but that has other shortcomings in our environment. So my question is: can someone point me to such a facility in Seam, if it exists, or at least to where in the code I should be looking if I plan to patch Seam?
Thanks.
If you're running in a Java EE container like JBoss 6 (and I really recommend doing so), all you need is to package your beans into a jar, place a META-INF/persistence.xml inside it, and put the jar into your WAR or EAR package. All @Entity-annotated beans inside the jar will be processed.
For unit testing, you could point the <jar-file> element to the generated .class output directory, and Hibernate will also pick up the entities. Or you can even configure things at runtime using Ejb3Configuration.addAnnotatedClass.
See http://docs.jboss.org/hibernate/entitymanager/3.6/reference/en/html/configuration.html
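A hedged sketch of the runtime variant, e.g. inside a test setup method (Hibernate EntityManager 3.x; "my-unit", Customer, and Order are hypothetical, and Ejb3Configuration was deprecated in later Hibernate versions):

import javax.persistence.EntityManagerFactory;

import org.hibernate.ejb.Ejb3Configuration;

Ejb3Configuration cfg = new Ejb3Configuration();
cfg.configure("my-unit", null);            // start from a named persistence unit
cfg.addAnnotatedClass(Customer.class);     // register entities discovered at runtime
cfg.addAnnotatedClass(Order.class);
EntityManagerFactory emf = cfg.buildEntityManagerFactory();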
