Currently in production we have Domino 8.5.3 running, together with an OSGi DOTS tasklet we developed that communicates with a JMS queue.
We are about to upgrade the Domino platform to Domino 9, and I was wondering whether there are any caveats or general advice regarding this OSGi DOTS tasklet that I need to consider before upgrading.
I have posted this on Server Fault as well for the server part; this question is meant to cover the programming side…
I can confirm it works after upgrading to Domino 9.
Can a Java 6 application work on WebSphere 8.5 if the nodes are built using Java 7? I have an EAR which was developed using Java 6 and deployed on WebSphere 8.5 using EA, but the web service always returns a 404.
Thanks.
It appears that no support is provided for Java SE 6 in that version of WebSphere.
From the online documentation for WebSphere 8.5.5 (not sure if this is the version you are using, though):
Notice: Java SE 6 is being removed from service. Java SE 8 is the recommended Java SDK because it provides the latest features and security updates. You can continue to use Java SE 6, but no service can be provided after the end of the support date, which might expose your environment to security risks.
In fact, it appears that Java SE 7 is not supported in this version of WebSphere either.
There is some confusion here about the meaning of "version" and how it applies to Java EE applications:
There is the version of the JVM which is selected to run the server.
There is the Java EE specification level used to encode XML documents (web.xml, application.xml, ejb-jar.xml, etc.) which are within an application.
There is the Java EE specification level which is supported by the server.
There is the Java compiler level which is set for classes packaged within an application.
There is, technically speaking, no specific version associated with an application. That an application is "at Java 7" can mean "the classes of the application were compiled to Java 7", or "the XML descriptors are set to the versions available in Java EE 7", or "the function of the application requires a container which supports Java EE 7".
A key detail is that when running with WebSphere, it's the server which decides which Java EE specification level the application runs under, not any feature of the application.
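To make the distinction concrete, here is a small, hypothetical servlet (assuming a Servlet 3.0+ container for the "effective" version calls) that prints the JVM version running the server, the Servlet spec level the container implements, and the level declared by the application's own descriptor; the compiler level of the classes, by contrast, is fixed at build time:

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet that makes the different "versions" visible at runtime.
public class VersionInfoServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        ServletContext ctx = getServletContext();
        PrintWriter out = resp.getWriter();

        // 1. The JVM that was selected to run the server
        out.println("JVM running the server: " + System.getProperty("java.version"));

        // 2. The Servlet spec level the server itself implements
        //    (the server decides this; the application cannot change it)
        out.println("Container implements Servlet " + ctx.getMajorVersion() + "." + ctx.getMinorVersion());

        // 3. The spec level declared by this application's own web.xml
        //    (these calls exist since Servlet 3.0)
        out.println("Application declares Servlet " + ctx.getEffectiveMajorVersion() + "." + ctx.getEffectiveMinorVersion());
    }
}
```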
I'm guessing that in the original question, "Java 6 application" means the application was compiled to Java 6 and that the application features are limited to those available in Java 6. That should work on all of WebSphere v7.0, v8.0, v8.5, and v9.0 (at all service levels).
There are some complications to consider when using a distributed topology (having a DMgr node and more than one application server node). A frequent complication is that one or more of the application server nodes is at a lower version than the DMgr node. This is a supported scenario (with some limits on how large a version difference is supported). The scenario is typical when a topology (a collection of federated nodes) is being migrated gradually from a particular WebSphere version to a higher version, so that a mixture of node versions is present during the migration. When this is the case, the DMgr tracks the versions of the application server nodes and constrains processing of the application to ensure the deployment is valid for all of the application server nodes to which the application is deployed.
Since the JavaEE level is set by the application server version, and since, generally, higher versions of the application server implement higher JavaEE levels, applications can function differently when migrated between application server versions. Whether this is the case for this question cannot be known without looking in more detail at the exact failure which is occurring.
I have a simple JSP/Servlet Maven application which allows a user to upload an archive file. The application then unzips the archive, which contains XML files, and parses them using basic SAX parsing. It generates an in-memory representation of these files and writes it to a Neo4j graph database, currently in embedded mode.
During development I used GlassFish v3, but with production in sight the request was made to move from GlassFish to Tomcat, and so I did. Apart from a few small issues with Tomcat forcing me to add JSF dependencies even though I'm not using any JSF, there is one big issue I have with Tomcat at the moment.
The largest test file I have takes about 8 seconds to upload and parse on GlassFish v3. After that, it takes about 2 seconds less, because I don't clean up the uploaded file (yet).
The same file on Tomcat 7 takes about 90 seconds to upload and parse the first time. Subsequent runs take about 20 seconds less, presumably for the same reason.
In any case, there's a performance difference of a factor of 10. I'm a little surprised, since I thought using Tomcat would actually increase the speed because it is more lightweight than GlassFish, given that I'm not really using the advanced functionality GlassFish provides.
Has anyone encountered a similar issue, and what did you do to resolve it? Is this even resolvable, or is it due to the way Tomcat works?
EDIT: The difference appears to be in the code section that is responsible for writing the in-memory representation of the files to the actual database… no idea why, though.
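For context, the write step described in the edit typically looks something like the sketch below (this assumes the Neo4j 2.x embedded API; with 1.x you would call tx.finish() in a finally block instead of using try-with-resources, and all names here are hypothetical). Whether the writes are grouped into a single transaction or spread over many small ones is usually the detail that dominates embedded write times:

```java
import java.util.List;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;

// Sketch of the described write step: persist the parsed in-memory
// representation into the embedded database.
public class GraphWriter {

    private final GraphDatabaseService db;

    public GraphWriter(GraphDatabaseService db) {
        this.db = db;
    }

    // Writing everything in one transaction, rather than one transaction per
    // element, keeps embedded write times low.
    public void write(List<String> elementNames) {
        try (Transaction tx = db.beginTx()) {
            for (String name : elementNames) {
                Node node = db.createNode();
                node.setProperty("name", name);
            }
            tx.success(); // commit the whole batch at once
        }
    }
}
```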
I could not find a comparison of Tomcat with GlassFish, but yes, the newer GlassFish versions are very lightweight and have very good performance. I have experienced the same. I would say that running an application server instead of Tomcat no longer means a huge administration and hardware overhead (and you can use lightweight EJB 3 and 3.1 if you like). GlassFish installations can be very small in size if you only select the necessary modules.
Check this page. It compares JBoss, GlassFish and Resin:
http://hwellmann.blogspot.com/2011/06/java-ee-6-server-comparison.html
And this one compares GlassFish 3.1 and JBoss 6 & 7:
http://hwellmann.blogspot.com/2011/10/jboss-as-7-catching-up-with-java-ee-6.html
We have an application here developed on the Java EE 5 stack (using JSF, RichFaces, EJB, JPA, Hibernate, JAAS) that runs inside GlassFish 3.1. The thing is, we need to run it as an installable deployment (actually many deployments =]).
My question is: What can we do to have the smallest footprint for the system?
I've already looked into:
uninstalling things through the update tool (e.g. the admin parts),
running the application using embedded GlassFish, but reusing the already existing domain (a sketch follows below),
configuring domain.xml to strip out features (but in a trial-and-error way),
and some articles on how to configure GlassFish for a production environment.
But as the system will be used by one user at a time, I would like to hear from you about options in this environment.
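For reference, a minimal sketch of the embedded GlassFish option mentioned above, assuming the glassfish-embedded-all artifact is on the classpath (the port and archive name are hypothetical):

```java
import java.io.File;
import org.glassfish.embeddable.Deployer;
import org.glassfish.embeddable.GlassFish;
import org.glassfish.embeddable.GlassFishProperties;
import org.glassfish.embeddable.GlassFishRuntime;

public class EmbeddedLauncher {

    public static void main(String[] args) throws Exception {
        GlassFishProperties props = new GlassFishProperties();
        props.setPort("http-listener", 8080); // hypothetical port

        // Boot a minimal in-process GlassFish instead of a full installation
        GlassFish glassfish = GlassFishRuntime.bootstrap().newGlassFish(props);
        glassfish.start();

        // Deploy the existing application archive into the embedded server
        Deployer deployer = glassfish.getDeployer();
        deployer.deploy(new File("myapp.war")); // hypothetical archive name

        // ... keep running until shutdown is requested, then:
        // glassfish.stop();
        // glassfish.dispose();
    }
}
```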
We have a setup of 3 different Java EE servers, all communicating via both JGroups and RMI. We are heavily unit testing our code and the whole team is totally in favor of TDD, but we are facing problems when it comes to integration testing our servers.
Especially our custom fail-over/reconnect/termination-detection "algorithms" would need some automated testing, because we often see them break, and we currently always fix them by trial-and-error testing.
We are using the following libraries/frameworks: Tomcat, Maven, Spring 3, RMI, JGroups
Any ideas, suggestions, links and resources are welcome!
Interesting that nobody has answered this question since 2011. Maybe there wasn't anything to recommend?
If you are looking into integration testing only, it's much easier. You can write your usual JUnit/TestNG tests and use Arquillian to take care of the container (lifecycle, deployments, configuration, etc.). You can run all the components (tests, containers, deployments) on a single node, bind them to different IPs or ports, and let JGroups do all the cluster communication as usual.
http://arquillian.org/
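A minimal sketch of what such a test might look like, assuming JUnit 4, the arquillian-junit-container dependency, and a container adapter configured in arquillian.xml (all class and archive names below are hypothetical):

```java
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertTrue;

@RunWith(Arquillian.class)
public class FailoverDetectionIT {

    // Stand-in for the real fail-over/reconnect logic; hypothetical
    public static class FailoverDetector {
        public boolean isPeerReachable(String node) {
            return true; // the real implementation would consult cluster membership
        }
    }

    // Arquillian packages this archive and deploys it into the container it manages
    @Deployment
    public static WebArchive deployment() {
        return ShrinkWrap.create(WebArchive.class, "failover-it.war")
                .addClass(FailoverDetector.class);
    }

    @Test
    public void detectsTerminatedPeer() {
        // Runs inside the deployed archive, so the test can exercise the real
        // JGroups/RMI wiring instead of mocks (assertion is illustrative)
        assertTrue(new FailoverDetector().isPeerReachable("node-2"));
    }
}
```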
Moreover, there is now a whole book available about integration testing, called 'Continuous Enterprise Development in Java'.
http://www.amazon.com/Continuous-Enterprise-Development-Andrew-Rubinger/dp/1449328296
The situation is, IMO, much worse when it comes to system testing. I am going to mention just one name here: SmartFrog, which is a 'powerful and flexible Java-based software framework for configuring, deploying and managing distributed software systems'. The learning curve is terrible, though.
http://www.smartfrog.org/
I am new to programming with OSGi. Can anyone provide me with a working example of a client/server OSGi service invocation?
I have been trying to achieve this for the last 2 weeks without any success.
My service is being discovered and executed by an Eclipse instance on the same machine, but when I try the same thing from another machine it fails.
Any help will be appreciated.
Thanks.
In the OSGi platform (Release 4 Version 4.1), services discovered through the OSGi service registry are local services, available only inside a single OSGi framework instance (i.e. a single JVM). You cannot expect to execute an OSGi service running on a different machine.
If you want to call OSGi services across multiple framework instances (i.e. multiple JVMs / multiple machines), you should take a look at the Distributed OSGi specification (RFC 119), which will be part of the upcoming OSGi specification (Release 4 Version 4.2), with CXF as a reference implementation.
Update: Another way to call remote OSGi services is to use R-OSGi. It is a middleware that provides an almost transparent way to access services on remote OSGi platforms.
OSGi services are intra-VM, not inter-VM, unless you add distribution on top.
You might want to look at Brian's tutorial, which does a good job of showing how OSGi services can be exported and how ECF can be used to perform the remote distribution. There are quite a few bundles involved, but he does a good job of explaining it.
Unless you are playing with either CXF's or Eclipse's Distributed OSGi implementations, there's nothing related to remoting in OSGi. You should be able to make any remoting implementation work between 2 OSGi-based processes.
What I will say is that you will probably have class loader issues if you try to use RMI or any of the RPC patterns available in Spring's remoting. This is solvable, but requires a good understanding of OSGi and class loaders.
Does your code work if you run it outside of OSGi? Are you using a firewall? Can you run any network-based service on your PC that is visible to other PCs on the network?
As described, the problem looks more network-related than OSGi-related.
Also, you didn't mention what failure you get when running across different PCs.
The Riena platform from the Eclipse Foundation provides OSGi remote services by publishing the services as web-service endpoints.
Maybe the answers should be updated, since they are no longer valid.
There is now OSGi Remote Services available. You can read about it in the OSGi Enterprise Specification, Chapter 100. There are two main implementations: Eclipse ECF and Apache CXF. There is a good example for ECF here.
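A minimal sketch of what exporting a service under the Remote Services spec looks like on the provider side; the standard property below comes from the spec, while the service interface and its trivial implementation are hypothetical (the distribution provider, e.g. ECF or CXF DOSGi, detects the property and creates the remote endpoint and client-side proxies):

```java
import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    // Hypothetical service contract; in practice this lives in its own API bundle
    public interface GreetingService {
        String greet(String name);
    }

    @Override
    public void start(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<>();
        // Standard Remote Services property (OSGi Enterprise spec, chapter 100):
        // mark every interface of this service as exported for remote access
        props.put("service.exported.interfaces", "*");

        // The distribution provider picks up services carrying this property
        GreetingService impl = name -> "Hello, " + name;
        context.registerService(GreetingService.class.getName(), impl, props);
    }

    @Override
    public void stop(BundleContext context) {
        // services registered by this bundle are unregistered automatically
    }
}
```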