We are using WebSphere Application Server 8.5 for our enterprise applications.
I want to know: is there any integration testing framework other than Arquillian?
I tried running Arquillian with both the embedded and the remote container. Because the embedded container does not support CDI, we don't want to use it. And with the remote container we are not able to start our tests because of a security issue. Even if we solved that, we could not use @PersistenceContext or @Resource etc.
So I would like to know if there is an integration testing framework specifically for WebSphere.
Thank you
P.S.
I think I misunderstood @PersistenceContext and @Resource. Please correct me if I am wrong.
I can use @PersistenceContext or @Resource in my actual application but not in my Arquillian test classes. Am I right? Earlier I thought I could not use these anywhere in my code.
Secondly, as a quick test, I tried disabling administrative security on WAS and the test case ran successfully.
I want to know: is there any integration testing framework other than Arquillian?
Currently there are no good Java EE testing alternatives to Arquillian that I know of. However, you can build a decent test setup using some very basic Ant scripting and JUnit.
(See this answer for testing in Java EE for an example implementation)
I think I misunderstood @PersistenceContext and @Resource. Please correct me if I am wrong.
I can use @PersistenceContext or @Resource in my actual application but not in my Arquillian test classes. Am I right? Earlier I thought I could not use these anywhere in my code.
If you're going to use @PersistenceContext or @Resource in a class, that class should be container managed (i.e. deployed in an application as part of an EAR/WAR/EJB module).
For future reference:
Secondly, as a quick test, I tried disabling administrative security on WAS and the test case ran successfully.
For a secured server you need to add a username/password and SSL configuration. For further information look here.
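For example, credentials and ports for a secured remote WAS container are typically supplied in arquillian.xml. The property names below are illustrative only (they vary between versions of the WAS remote adapter), so treat this as a sketch and verify against the arquillian-container-was documentation:

```xml
<arquillian xmlns="http://jboss.org/schema/arquillian">
  <container qualifier="websphere-remote" default="true">
    <configuration>
      <!-- Illustrative property names; check your adapter's docs -->
      <property name="remoteServerAddress">localhost</property>
      <property name="remoteServerSoapPort">8880</property>
      <property name="securityEnabled">true</property>
      <property name="username">wasadmin</property>
      <property name="password">wasadmin</property>
    </configuration>
  </container>
</arquillian>
```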
Because the embedded container does not support CDI, we don't want to use it.
That is actually not true: embedded containers do support CDI, and according to the Arquillian blog CDI support is one of the few reasons to use them... Update: on a second look you are right, as shown here. The blog is probably talking about all the other containers...
What is not supported by embedded containers?
Remote interfaces are not supported in the embeddable container.
In any case, the article quoted above provides a good starting point for deciding when to use which container type.
Related
I was recently fascinated by a migration scheme presented by Talend (http://coders.talend.com/), via the talk recording here and the slides from the talk by Andrei Shakirin.
I've seen a lot of publications by his colleague Christian Schneider as well.
And, of course, part of the subject matter is Christian's blueprint-maven-plugin.
The presentation linked above shows how it is possible to continuously deploy the same code base both to a common Java EE container like Tomcat and to an OSGi container like Karaf, which is exactly what I am interested in (not Tomcat, but Wildfly or Glassfish, for instance).
So, I wonder:
How does blueprint-maven-plugin handle the @Stateful and @Stateless annotations, and @ApplicationScoped, @SessionScoped, @RequestScoped etc. on beans, while producing the blueprint file?
Also, say I have new code to write which would use CDI, and I want it to also be deployable to Karaf. How should I write that piece? Should I avoid the @Stateful and @Stateless annotations?
How would those annotations (if for any reason they are irrelevant when I deploy to OSGi) be interpreted by the OSGi container (Karaf), since they DO appear in the code?
After a quick scan of the blueprint-maven-plugin source code, it seems the EJB annotations @Stateless and @Stateful and the *Scoped annotations aren't handled by the plugin.
@Stateless and @Stateful aren't really part of CDI but of an extension to it (EJB). If you really love what EJB does, consider using the OpenEJB feature in Karaf. You should be able to use @Inject without these annotations, as long as the target is a valid bean.
Annotations are just markers in source code; they don't do anything unless a processor is registered to handle them.
E.g. you can instantiate an EJB with the new keyword and perform unit tests on it (fulfilling all injection with setters, of course).
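As a minimal sketch of that idea (all class and method names here are invented for illustration; the @Stateless annotation is left out as a comment so the sketch compiles without the EJB API on the classpath, and outside a container it would be inert anyway):

```java
// A collaborator that would normally be injected by the container.
class AuditDao {
    String prefix() { return "Hello, "; }
}

class GreetingBean {              // imagine @Stateless on this class
    private AuditDao dao;         // imagine @EJB / @Inject on this field

    void setDao(AuditDao dao) {   // setter lets a test fulfil the injection
        this.dao = dao;
    }

    String greet(String name) {
        return dao.prefix() + name;
    }
}

public class GreetingBeanTest {
    public static void main(String[] args) {
        GreetingBean bean = new GreetingBean(); // plain `new`, no container
        bean.setDao(new AuditDao());            // manual "injection"
        System.out.println(bean.greet("world"));
    }
}
```

The annotation processor (the EJB container) never runs here, so the annotations simply have no effect on the test.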
Disclaimer:
I don't really use this plugin myself, and a newer version of the plugin might support these EJB annotations.
We are facing the same problem as the one identified here: Weld using alternative producer from src/test/META-INF/beans.xml.
With Maven and Weld SE, for tests, Weld does not use the beans.xml located in test/resources/META-INF.
We looked on Google, and in the Weld Jira issues, but it seems other people do not have the same problem as us.
So, does someone know how to use Weld in JUnit tests with Maven so that Weld SE is configured by the beans.xml located in test/resources/META-INF?
Weld itself cannot help you with this; Weld SE will in fact pick the first beans.xml on the classpath. To be more precise, main/resources/META-INF/beans.xml will be used for the bean archive main/java/, while test/resources/META-INF/beans.xml will be used for the bean archive test/java/. Therefore, mixing this up would be unwise at best, and so Weld does not support it by default.
Anyhow, there are several options to solve your trouble.
Use Arquillian + Shrinkwrap
For CDI testing, this is the best way you can hope for, and a very good one once you learn it. ShrinkWrap allows you to tailor the deployments exactly to your needs, including only the classes you want and also the beans.xml you want. There is an Arquillian container for Weld SE which is even used in the Weld SE test suite, so you can take inspiration from there.
Disable discovery while bootstrapping Weld SE
Somewhere in your unit tests, you are starting the Weld container. While doing so, you can use Weld#disableDiscovery(), which means you will create a synthetic bean archive. In such an archive, default discovery is disabled and only the things you specifically add (via methods such as addBeanClasses, addPackage, ...) will land in the resulting archive. And since you have no discovery, no beans.xml will be picked up! Instead, you define all the alternatives, interceptors, ... programmatically (again, there are methods for this on the Weld class). I imagine this could be "easily" placed into some @Before method should you need to do this repeatedly. Alternatively, you could use weld-junit, which provides a JUnit @Rule and allows you to easily describe the deployment on a per-class basis.
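A minimal bootstrap sketch using the Weld SE methods mentioned above (GreetingService and MockGreetingService are placeholder bean classes, not real types):

```java
import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;

// Synthetic bean archive: discovery is off, so no beans.xml is read,
// and the alternative is enabled programmatically instead.
Weld weld = new Weld()
        .disableDiscovery()
        .addBeanClasses(GreetingService.class, MockGreetingService.class)
        .addAlternative(MockGreetingService.class);

try (WeldContainer container = weld.initialize()) {
    GreetingService service = container.select(GreetingService.class).get();
    // exercise the bean under test here
}
```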
Both of the above approaches should help you with those hardships. But as I said, there is no way to achieve this with just plain Weld SE.
I want to integrate Vaadin 7 with OSGi, but there is no longer an AbstractApplicationServlet class.
I've followed the integration using the Vaadin OSGi bridge by Neil Bartlett: https://github.com/njbartlett/VaadinOSGi.
I've tried using the VaadinServlet somehow, but with no luck.
I've also searched for other solutions but found none.
Do you have any suggestions?
Thanks
Vaadin 7 has a lot of design changes that are not visible in default use cases, but for OSGi integration in particular you have to do some extra work.
To get started, you should try to understand the initialization process around these classes:
VaadinServlet, VaadinServletService, VaadinSession and UIProvider.
The problematic parts are the methods that take class names as arguments; you will have to work around this by, e.g., implementing a factory that directly injects your instances.
If you look at the source of UIProvider.createInstance(..), you can see that the original implementation tries to create a new instance reflectively; this will fail since Vaadin does not see your classes in OSGi. The same principle applies to the other classes I mentioned as well.
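To illustrate, a UIProvider that sidesteps the reflective instantiation might look roughly like this (MyUI is a placeholder for your own UI class; this is an untested sketch, not a complete OSGi integration):

```java
import com.vaadin.server.UIClassSelectionEvent;
import com.vaadin.server.UICreateEvent;
import com.vaadin.server.UIProvider;
import com.vaadin.ui.UI;

// Returns the UI instance directly, so Vaadin never has to load the
// class by name across OSGi bundle boundaries.
public class MyUIProvider extends UIProvider {

    @Override
    public Class<? extends UI> getUIClass(UIClassSelectionEvent event) {
        return MyUI.class; // placeholder UI class from our own bundle
    }

    @Override
    public UI createInstance(UICreateEvent event) {
        return new MyUI(); // instantiated in our bundle's classloader
    }
}
```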
Another thing you have to look at is the new separation of JARs in Vaadin 7.
A good approach is to attach a bundle fragment with a blueprint context that registers a BundleHttpContext, the same way it worked in Vaadin 6. Attaching fragments also works for the themes you want to use.
Sorry that I can't provide a turnkey solution, but I hope this helps you look into it yourself.
I would like to set up an infrastructure for integration testing.
Currently we bootstrap Tomcat using Maven and then execute HttpUnit tests.
But the current solution has a few drawbacks:
Any changes committed to the database need to be rolled back manually at the end of the test.
Running code coverage on integration tests is not straightforward (we are using Sonar).
My goals are:
Allow automatic rollback between tests (hopefully using Spring's @Transactional and @Rollback)
Simple, straightforward code coverage
Using @RunWith so that the system is bootstrapped from JUnit and not externally
Interacting with live servlets and JavaScript (I am considering switching from HttpUnit to Selenium…)
Reasonable execution time (at least not longer than the existing execution time)
The goals above look reasonable to me and common to many Java/J2EE projects.
I was thinking of achieving those goals by using Arquillian and the Arquillian Spring Framework Extension.
See also https://github.com/arquillian/arquillian-showcase/
Does anyone have experience with Arquillian and with the Arquillian Spring Framework Extension?
Can you share issues, best practices and lessons learned?
Can anyone suggest an alternative approach to the above?
I can't fully answer your question, only give some tips.
Regarding the automatic rollback: in my case, I use Liquibase to initialize the test data on HSQLDB or H2, both of which can run in-memory, so there is no need to roll back.
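As an illustration, an in-memory H2 test database can be selected with just a JDBC URL; the property keys below are made-up examples, so adapt them to however your datasource is actually configured:

```properties
# In-memory H2 database for tests; DB_CLOSE_DELAY=-1 keeps the database
# alive for the whole JVM instead of dropping it when the last connection closes.
jdbc.driver=org.h2.Driver
jdbc.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1
jdbc.username=sa
jdbc.password=
```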
For Arquillian: it's a good real-world testing approach. What I learned is that the Arquillian Spring Framework Extension is just an extension; you have to bind to a specific container like JBoss, GlassFish or Tomcat to make the tests run.
But I don't know how to apply it to a Spring-based Java SE program which does not need application server support.
My lesson learned is about the JBoss port conflict: jboss-dist uses 8080 as the default HTTP port, but our company proxy is also on 8080, so I couldn't use Maven to get the jboss-dist artifact.
Hope others can give more info.
I am trying to write a JAX-WS client for a service exposed using Axis2 on WebSphere 8, Java 1.6.
A standalone client (i.e. a client running on my local machine) works fine, but when I deploy the client in an application running on the same WebSphere server I get:
java.lang.ClassCastException: Cannot cast class org.apache.axis2.jaxws.spi.Provider to class javax.xml.ws.spi.Provider
at the line:

    OpenPortType service = OpenService
            .create(wsdlFile.toURL(),
                    new QName("http://www.test.com/schemas/public/open-api/Open/", "OpenService"))
            .getPort(OpenPortType.class);
When I googled, I found that a similar problem existed on WebLogic: https://wso2.org/jira/browse/CARBON-4835
When we look at the source of the axis2.jaxws.spi.Provider class, we see that it is a subclass of javax.xml.ws.spi.Provider!
I'm wondering what could be wrong? Any ideas?
Unless you are calling Axis2 capabilities directly, rather than simply using the JAX-WS APIs, you do not want to package Axis2 into your EAR. WebSphere provides its own JAX-WS implementation, and I'm not surprised it conflicts with another JAX-WS implementation you've deployed in your app. (In particular, note that WebSphere's own implementation is based on Axis2.)
If you do need to deploy a different implementation, you'll probably have to at least adjust your WebSphere classloader policy to parent-last. There might be more to do as well; it's been a while since we did this ourselves. But it's much easier and cleaner to use the built-in JAX-WS implementation, which means not deploying any of those JARs at all.