Integration Testing Distributed Java EE Applications

We have a setup of three different Java EE servers, all communicating via both JGroups and RMI. We unit test our code heavily and the whole team is fully behind TDD, but we are running into problems when it comes to integration testing our servers.
In particular, our custom fail-over/reconnect/termination-detection "algorithms" need automated testing, because we keep seeing them break and we currently fix them by trial-and-error testing.
We are using the following libraries/frameworks: Tomcat, Maven, Spring 3, RMI, JGroups
Any ideas, suggestions, links and resources are welcome!

Interesting that nobody has answered this question since 2011. Maybe there wasn't anything to recommend?
If you are looking at integration testing only, it's much easier. You can write your usual JUnit/TestNG tests and use Arquillian to take care of the container (lifecycle, deployments, configuration, etc.). You can run all the components (tests, containers, deployments) on a single node, bind them to different IPs or ports, and let JGroups do the cluster communication as usual.
http://arquillian.org/
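To give a flavour of what that looks like, here is a minimal sketch of an Arquillian-managed JUnit test; ClusterMembershipService is a made-up stand-in for your own fail-over/termination-detection component:

```java
import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class FailoverDetectionIT {

    // Arquillian deploys this archive into the container it manages for the test run
    @Deployment
    public static WebArchive createDeployment() {
        return ShrinkWrap.create(WebArchive.class, "failover-it.war")
                .addClasses(ClusterMembershipService.class) // hypothetical class under test
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    // Injected from inside the running container, so the test exercises the real wiring
    @Inject
    private ClusterMembershipService membershipService;

    @Test
    public void reportsHealthyClusterOnStartup() {
        Assert.assertTrue(membershipService.isClusterHealthy());
    }
}

// Hypothetical stand-in for the real fail-over/termination-detection component.
class ClusterMembershipService {
    boolean isClusterHealthy() {
        return true; // the real implementation would query JGroups cluster state
    }
}
```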
Moreover, there is now a whole book on this kind of integration testing, called 'Continuous Enterprise Development in Java'.
http://www.amazon.com/Continuous-Enterprise-Development-Andrew-Rubinger/dp/1449328296
The situation is, IMO, much worse when it comes to system testing. I am going to mention just one name here: SmartFrog, a 'powerful and flexible Java-based software framework for configuring, deploying and managing distributed software systems'. The learning curve is terrible, though.
http://www.smartfrog.org/

Related

SpringBoot with Jetty Vs Core Java with OSGI Jetty

My project has a requirement to deploy a Java-based application as an operating-system job (and not use any container). The application needs to have the following capabilities:
Scheduling
Few HTTPS based services
Ability to make JMX calls
Storage: Data for the last 5 to 10 minutes of transactions (not more than 600 rows x 20 columns). Something like embedded H2 or in-memory options
Decision Tree: Something like Drools..
My manager wants to write this application in core Java with an OSGi-ized version of Jetty. I am suggesting Spring Boot with embedded Jetty (which gives me ready-to-use capabilities for scheduling, JMX integration and REST services).
His leaning towards core Java comes from the requirement that this application be extremely efficient, fast and self-contained. He wants to reduce dependencies on open source. I have never worked directly with OSGi, but I have used products built on it, like Eclipse.
Can somebody explain how OSGi-based development might have advantages over Spring Boot?
For many people OSGi is superfluous, because they don't see the value in being modular; to them it isn't worth the trouble.
Think about the application lifecycle, which is more or less plan-develop-test-deploy.
How many developers do you have? If many, OSGi helps a lot, because being modular makes the boundaries very clear. You can delegate things very easily.
If outsourcing is your thing, you can just hand over the module APIs and tell them to develop against them. They will never know how the rest was implemented, so there is no fear of secrets being leaked.
Unit tests become easy. You see clearly what you can test, and everything else you mock/stub/spy/fake. Unit tests can be reused in integration tests; of course that isn't news, but the trick is running unit tests outside the OSGi container and integration tests inside it. So if you decide OSGi was not worth it, your code still works fine (the unit tests being the proof).
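As a rough illustration of the "unit tests outside the container" point, here is a plain JUnit sketch with a made-up GreetingService whose TimeSource collaborator the test fakes, no OSGi involved:

```java
import org.junit.Assert;
import org.junit.Test;

// Plain JUnit, no OSGi container: the collaborator is replaced by a hand-rolled fake.
public class GreetingServiceTest {

    interface TimeSource {
        int currentHour();
    }

    static class GreetingService {
        private final TimeSource time;

        GreetingService(TimeSource time) {
            this.time = time;
        }

        String greet(String name) {
            return (time.currentHour() < 12 ? "Good morning, " : "Hello, ") + name;
        }
    }

    @Test
    public void greetsInTheMorning() {
        GreetingService service = new GreetingService(() -> 9); // fake TimeSource says it's 9am
        Assert.assertEquals("Good morning, Alice", service.greet("Alice"));
    }
}
```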
You can make your app a collection of modules, with every module having independent versioning and its own source repository. That makes it easier to handle and find bugs. For example: the current app crashed, you find out that sub-module 1.2 is throwing errors, you try version 1.1 (still bad), then version 1.0 (good), so the bug was introduced in 1.1 (this avoids bisecting the whole source tree). Programmers don't need to be perfectly synchronized with each other if they are working in different modules.
How do you plan to update the app? Most frameworks take the all-or-nothing approach, where you have to stop the world, update, and then restart the app. If you make things modular, you only need to update that one thing, making the downtime very small and sometimes even zero.
If you need to make a big change in your app but can't afford to refactor everything right now, with OSGi you can run the system with both my-module-1.0 and my-module-2.0. You can even adapt my-module-1.0 to redirect calls to my-module-2.0, but that is a kind of last-resort hack (just saying that you can, if you want to).
I can do everything you say without OSGi, right? Well, probably you can, but in the end it would look a lot like OSGi.
I love the dependency injection of my framework. No problem, OSGi has something like that.
I hate dependency injection, it kills my app's performance. No problem, you can use something like osgi.getService(MyService.class);. The OSGi container isn't concerned with intercepting every call of your app.
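For reference, what that osgi.getService(...) shorthand roughly corresponds to in the standard API is a lookup via the BundleContext; MyService below is just a placeholder interface:

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// Placeholder service interface for the sketch.
interface MyService {
    void doWork();
}

// Minimal activator that looks the service up programmatically instead of via injection.
public class MyActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        ServiceReference<MyService> ref = context.getServiceReference(MyService.class);
        if (ref != null) {
            MyService service = context.getService(ref);
            service.doWork();
            context.ungetService(ref); // release the reference when done
        }
    }

    @Override
    public void stop(BundleContext context) {
        // nothing to clean up in this sketch
    }
}
```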
OSGi is like Java++, Java plus modules.
You can mix Spring Boot with OSGi; I can't say whether this is good or bad. There are many libraries and frameworks that fit your list, and many will work out of the box with OSGi.

Application Server for non-Web Spring/Hibernate Application

We are developing an open source trading platform based on the Spring Framework and Hibernate: http://code.google.com/p/algo-trader/ and http://www.algotrader.ch. The application consists of a trading framework and several strategies that can be started independently. So far, these different parts have been running in separate JVMs communicating through RMI and JMS.
To avoid unnecessary serialization and network overhead we would like to run the entire application within some sort of container (potentially an application server). We do, however, have the requirement that the individual parts of the application can be deployed, started and stopped independently.
We have looked into OSGi, but a lot of the libraries we use are not OSGi-ready yet, so this is not currently an option. Also, please note that there is no web GUI in our application.
Any suggestions on this?
Thanks
Andy
If OSGi is not an option, then the functionality can be broken into smaller units and deployed as utility JARs; deployed that way, they can be managed independently.
For the application server, I feel either GlassFish or JBoss would be a good option, considering they are open source and free.
At a later point in time you could also look at WebLogic (free for development).
So in your case you would, for example, split the static data configuration (Counterparty, Currencies) and Dealing (Pricing, Quoting, Booking) into two separate features.
For your choice of application server I advise JBoss, especially version 7.1, which is faster and more stable.

OSGi in Distributed Infrastructures

We're working on an OSGi-based infrastructure for processing stream-based data flows. Specific processing tasks are executed by individual OSGi components. We now need the possibility to distribute those components over different machines, which means, we need some kind of communication mechanism between OSGi components/containers.
During my research I came across different potential solutions: R-OSGi, Apache CXF for Distributed OSGi, Eclipse Communication Framework.
ECF seems particularly interesting as it supports different transport formats and provides support for things like service discovery.
My central questions:
Are there any detailed tutorials/walk-throughs for setting up an ECF infrastructure within Felix? (From my research I found that Felix support has been added recently.)
Are there any solutions besides the three listed above which I might have missed?
Is there a reason for taking Apache CXF instead of ECF?
The first question -- whether there is a detailed walk-through for setting up ECF with Felix -- I don't know the answer to, though a search engine query combining those terms might turn something up.
The problem is ECF uses the Equinox infrastructure, and has at times inadvertently relied on packages that are non-public through transitive dependencies (particularly the Runtime API which uses Equinox for non-public debugging). This, in turn, means that ECF relies on a whole host of other components to be available and it's this set which typically isn't well defined on a Felix runtime.
You have missed out Paremus' Service Fabric, which is a commercial OSGi cloud solution. I'm not sure if you were specifically focussing on open-source or not; but if you are including commercial licenses then they have a very robust architecture for remote services.
Finally, the Apache CXF over ECF question -- if you're using Felix, I'd argue that going with Apache CXF is probably easier than going with ECF. This is mainly due to the dependency set and getting it working, combined with the fact that ECF may not be tested on Felix and so may assume particular aspects of the Equinox runtime (which includes, for example, the runtime's parent classloader delegation to pick up things on the boot classpath). This isn't really the fault of ECF per se, but rather an artefact of how the Eclipse ecosystem works.
If you want to communicate with non-OSGi runtimes, there's an advantage to Apache CXF in that it can generate WSDL for interaction with other languages. I believe you can do the same thing in ECF with a bit more work. The CXF solution is likely to be more verbose than a corresponding ECF one (WSDL always is), but if you're not handling high volumes of requests this isn't likely to make a significant difference.
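To make the comparison a bit more concrete, here is a rough sketch of what exporting a service through CXF Distributed OSGi looks like from the application side: you register the service with the standard Remote Services properties and let the distribution provider do the rest. GreeterService is made up, and the CXF-specific property names/values are assumptions based on its documented defaults:

```java
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Made-up service interface and implementation for the sketch.
interface GreeterService {
    String greet(String name);
}

class GreeterServiceImpl implements GreeterService {
    public String greet(String name) {
        return "Hello " + name;
    }
}

public class GreeterActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<>();
        // Standard Remote Services properties: mark the service for export...
        props.put("service.exported.interfaces", "*");
        // ...and ask the CXF distribution provider to expose it as a SOAP/WSDL endpoint
        // (config type and address property assumed from CXF DOSGi's documentation).
        props.put("service.exported.configs", "org.apache.cxf.ws");
        props.put("org.apache.cxf.ws.address", "http://localhost:9090/greeter");

        context.registerService(GreeterService.class, new GreeterServiceImpl(), props);
    }

    @Override
    public void stop(BundleContext context) {
        // the service registration is cleaned up automatically when the bundle stops
    }
}
```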

Just how scalable is Grails?

I'm looking to make a website that will probably get some heavy, repetitive traffic. Is Grails up to the task?
I agree with lael. Also, because it's built on Java technologies, there are a lot of proven clustering and 'enterprisey' tools available which let you scale easily across multiple application servers.
The cloud tools around Grails are also becoming very good and make deploying to a cloud like EC2 very easy. I've recently been using Cloud Foundry and found it very good.
As the first poster points out however, you can write a badly performing application in any framework/language. One thing I'd recommend is getting a good understanding of Hibernate which is the underlying persistence library. If you understand how that works, it should help you avoid making any silly mistakes at the DB level. On this side of things, a tool like p6spy is great for checking what the database is up to during normal use. It should help you spot any repetitive queries.
The scalability of your web application won't really depend on what language/framework you choose to use, but rather how your application is built. You can build a scalable web application in Grails, just as you can build an incredibly slow application in C++. If Grails is the framework you would like to use, then use it; you can always rewrite the slow parts in Java or another fast language, if need be. (After all, that's what Twitter did with Scala.)
Disclaimer: I've never actually used Grails.
Grails is essentially a thin layer on top of the Spring Framework, which many consider to be a very scalable framework in the enterprise world. Spring + Hibernate has become a standard in many Java shops around the globe.
If you run into performance bottlenecks in Groovy, you can always rewrite those parts in Java.
Take a look at the Success Stories for examples of sites that were written in Grails. The Testimonials are also a good place to look for examples. You will use a little more memory (heap and permgen) than a vanilla Java app, but you can tune it just like you would any other Java application.
On the low end you aren't going to find the $3/month hosting options that you could with a PHP stack (for example). That said, there are some good caching solutions for Grails apps: EhCache, Memcache, etc. Beyond that you can also set up an Apache layer to cache static resources or whatever you need.
I don't mean to pile on here. You've already got some great answers, but I just want to add one thing that I was reminded of recently. Scalability depends not only on the software you write (regardless of language/framework) but also on the deployment environment. A very well written application deployed on an undersized or poorly configured server will not scale at all. If you do use Grails or any other Java-based framework, the default settings of your container (Tomcat, JBoss, etc.) will probably not be what you need.
Just something to keep in mind,
Dave
Grails runs on the JVM. Simply put, you will not find a more scalable, solid and robust runtime platform than the JVM anywhere. That's Grails's big advantage over, say, PHP or RoR.

How does OSGi manage interaction of components running in separate JVMs?

I have been trying to understand a bit more about the wider picture of OSGi without reading thru the entire specification. As with so many things, the introduction to what OSGi actually is was probably written by someone who had been working on it for a decade and perhaps wasn't best placed to put themselves in the mindset of someone who knows nothing about it :-)
Looking at Felix's example DictionaryService, I don't really understand what is going on. Is OSGi a distinct instance of a JVM into which you load bundles which can then find each other?
Obviously it is not just this because other answers on StackOverflow are explicit that OSGi can solve the dependency problem of a distributed system containing modules deployed within distinct JVMs (plus the FAQ keeps talking about networks).
In this latter case, how does a component running in one JVM interact with another component in a separate JVM? Can the two components "use" each other as if they were running within the same JVM (i.e. via local method calls), and how does OSGi manage the marshalling of data across a network (do you have to use Serializable for example)?
Or does the component author have to use some other distinct mechanism (either provided by OSGi or written themselves) for communication between remote components?
Any help much appreciated!
Yes, OSGi only deals with bundles and services running on the same VM. However, one should note that it is a distinct feature of OSGi that it facilitates running multiple applications (in a controlled way and sharing common modules) on the same JVM at all.
When it comes to accessing services outside the client's JVM, there is currently no standardized solution. Paremus Infiniflow and the derived open-source project Newton use an SCA approach. The upcoming 4.2 release of the OSGi specs will address one side of the problem, namely how to use generic distribution software in such a way that it can bring remote services into the client's JVM.
As somebody mentioned R-OSGi: this approach also deals with the other side of the problem, namely how to manage dependencies between distributed OSGi frameworks, since R-OSGi is not generic distribution software but explicitly deals with the lifecycle issues and dependency management of OSGi bundles.
As far as I know, OSGi does not solve this problem out of the box. There are OSGi bundles, for example Remote OSGi, which allow the programmer to distribute services across a network.
Not yet; I think it's being worked on for the next release.
But some companies have already implemented distributed OSGi. One I'm aware of is Paremus' Infiniflow (http://www.paremus.com/products/products.html). At LinkedIn they are also working on this. More info here: Building LinkedIn's next-gen architecture with OSGi, and here: Matt Raible: Building LinkedIn's next-gen architecture.
Here's a summary of the changes for OSGi 4.2: Some thoughts on the OSGi R4.2 draft. There's a section on RFC 119 dealing with distributed OSGi.
AFAIK, bundles run in the same JVM but are not loaded by the same class loader (that's why you can use two different versions of the same bundle at the same time).
To interact with components in another JVM, you must use a network protocol such as RMI.
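For completeness, here is a bare-bones RMI sketch of such a cross-JVM call (all names are illustrative; the remote interface must be on the classpath of both JVMs):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Shared contract: must be available in both the server and client JVMs.
interface QuoteService extends Remote {
    double latestPrice(String symbol) throws RemoteException;
}

public class QuoteServer implements QuoteService {

    @Override
    public double latestPrice(String symbol) {
        return 42.0; // placeholder implementation
    }

    public static void main(String[] args) throws Exception {
        // Export the object and publish it in an RMI registry on port 1099.
        QuoteService stub = (QuoteService) UnicastRemoteObject.exportObject(new QuoteServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("quotes", stub);

        // A client in another JVM would then do:
        // QuoteService quotes = (QuoteService) LocateRegistry.getRegistry("host", 1099).lookup("quotes");
        // double price = quotes.latestPrice("ACME");
    }
}
```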
The OSGi alliance is working on a standard for distributed OSGi:
http://www.osgi.org/download/osgi-4.2-early-draft2.pdf
There even is an early Apache implementation of this new standard:
http://cxf.apache.org/distributed-osgi.html
@Patriarch24
The accepted answer to this question would seem to indicate otherwise (unless I'm misreading it). Also, taken from the FAQ:
The OSGi Service Platform provides the functions to change the composition dynamically on the device of a variety of networks, without requiring a restart
(Emphasis my own.) Although the same FAQ describes OSGi as in-VM.
Why am I so confused about this? Why is such a basic question about a decade-old technology not clear?
The original problem OSGi addressed was more about distribution of code (and then configuration of bundles) than distribution of execution.
People looking at distributed components tend to look towards SCA instead.
The "introduction" link is not really an intro, it is a FAQ entry. For more information, see http://www.osgi.org/About/WhatIsOSGi Not hard to find I would think.
Anyway, OSGi is an in-VM SOA. That is, the OSGi Framework is about what happens inside the VM: it provides a framework for structuring your application inside the VM so you can build it, to a large extent, from components. So the core has nothing to do with distribution; it is completely oblivious of who implements the services, it just provides a mechanism for modules to meet each other in a loosely coupled way.
That said, the µService model reifies the joints between the modules, and it turns out that you can build support on top of the framework that provides distribution to the other components. In the latest releases we specified some mechanisms that standardize this in the core and provide a special service, Remote Service Admin, that can manage a distributed topology.
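On the consuming side the code looks the same whether the service is implemented locally or imported by a distribution provider/Remote Service Admin; here is a minimal sketch using a ServiceTracker, where PricingService is a placeholder interface:

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

// Placeholder interface; a distribution provider may register a proxy for it
// that forwards calls to a service in a remote framework.
interface PricingService {
    double priceOf(String symbol);
}

public class PricingClientActivator implements BundleActivator {

    private ServiceTracker<PricingService, PricingService> tracker;

    @Override
    public void start(BundleContext context) {
        // The tracker picks the service up whenever it appears, local or imported.
        tracker = new ServiceTracker<>(context, PricingService.class, null);
        tracker.open();

        PricingService pricing = tracker.getService(); // may be null if nothing is available yet
        if (pricing != null) {
            System.out.println("ACME price: " + pricing.priceOf("ACME"));
        }
    }

    @Override
    public void stop(BundleContext context) {
        tracker.close();
    }
}
```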
If you are looking for a distributed, OSGi-centric cloud runtime, then the Paremus Service Fabric (https://docs.paremus.com/display/SF16/Introduction) provides these capabilities.
One or more systems, each consisting of a number of OSGi assemblies (Blueprint or Declarative Services), can be dynamically deployed and maintained across a population of OSGi runtime frameworks (Knopflerfish, Felix or Equinox).
A lightweight RSA remoting framework is provided, with service discovery by default via DDS (a seriously good middleware messaging technology), though ZooKeeper and other approaches can be used. Currently supported remoting protocols include RMI and Avro.
Regards
Richard
