EJB and internationalization (i18n)

I am aware that according to the EJB spec one must not access files from within EJBs by using java.io.*.
The answers to the SO question Process files in Java EE suggest a couple of ways to work around this restriction, all of which involve considerable effort. I do understand the reasons behind this design decision (i.e. scalability, clustering).
My question is: does that mean I cannot use i18n property files within EJBs (files which are bundled in the jar)?
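For concreteness, this is the kind of usage I have in mind. As far as I understand, ResourceBundle reads bundled properties through the class loader rather than through java.io.File, so it may not even fall under the restriction (the bean and bundle names below are just examples):

    import java.util.Locale;
    import java.util.ResourceBundle;
    import javax.ejb.Stateless;

    @Stateless
    public class GreetingBean {
        // Looks up Messages_*.properties bundled inside the EJB jar.
        // The lookup goes through the class loader, not java.io.
        public String greet(Locale locale) {
            ResourceBundle bundle = ResourceBundle.getBundle("Messages", locale);
            return bundle.getString("greeting");
        }
    }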

Related

Why have coding over configuration at all?

When we talk about Spring (whichever module, say JDBC), one of the reasons we use it is that it enables dependency injection and controls the lifecycle of beans/classes. One of the most fundamental principles in programming is to code against interfaces rather than implementations: if I am using SQL Server driver v1 today, I can change it to v2 tomorrow, provided my code cares only about the Driver interface and not the implementations. In what case, then, would I ever need coding over configuration?
The wording of your question seems a bit strange to me. Perhaps you are asking if there are any drawbacks to using Spring-like dependency injection. I can think of a few drawbacks, but whether these drawbacks outweigh the potential benefits of Spring is a matter of opinion.
Unfortunately, a Spring XML file is much more verbose than the code that would achieve similar (but hard-coded) initialisation of objects.
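To make the comparison concrete, here is a sketch; the DAO and service classes are invented for the example:

    import javax.sql.DataSource;
    import org.springframework.jdbc.datasource.DriverManagerDataSource;

    public class HardCodedWiring {
        public static void main(String[] args) {
            // Hard-coded wiring: a few lines of Java.
            DataSource ds = new DriverManagerDataSource("jdbc:h2:mem:demo", "sa", "");
            JdbcOrderDao dao = new JdbcOrderDao(ds);       // invented DAO class
            OrderService service = new OrderService(dao);  // invented service class
        }
    }

    // The equivalent Spring XML spells each bean out:
    //   <bean id="ds" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    //     <constructor-arg value="jdbc:h2:mem:demo"/>
    //     <constructor-arg value="sa"/>
    //     <constructor-arg value=""/>
    //   </bean>
    //   <bean id="dao" class="example.JdbcOrderDao">
    //     <constructor-arg ref="ds"/>
    //   </bean>
    //   <bean id="service" class="example.OrderService">
    //     <constructor-arg ref="dao"/>
    //   </bean>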
A programmer has to look not just at code but also at a Spring XML file to figure out what is going on. This, arguably, is a form of the Yo-yo problem.
One significant benefit of Spring is that it can be used to instantiate and configure any Java class (assuming the classes provide getters and setters). In particular, Java classes do not need to be polluted with the need to inherit from framework infrastructure classes. If you don't mind polluting classes with the need to inherit from framework infrastructure, then it is possible to have much more concise configuration files for instantiating and configuring objects. A case study illustrating this idea can be found in Chapters 9, 10 and 11 of the Config4* Practical Usage Guide. I am not proposing that the approach used in that case study be used for all applications, but I think it is a good approach to use when there is a complex, standardised API (such as for JMS) that is implemented by multiple products. In the case study, the approach results in a significantly easier-to-use API and eliminates some potential bugs from applications. Spring doesn't offer such benefits.
Section 9.4.2 of the Config4* Practical Usage Guide outlines a 9-step initialisation process for typical JMS applications. The framework library discussed in the case study ensures that those 9 steps are carried out in the correct order. It has been years since I looked at Spring so I might be wrong, but I don't think Spring has the flexibility to (easily or perhaps at all) enforce such a complex 9-step initialisation mechanism.
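To illustrate the kind of enforcement I mean: a framework class can hard-wire the ordering in its constructor, so callers simply cannot get it wrong. A minimal sketch using plain JMS (the JmsSender class is invented for illustration and shows only four of the steps, but the idea scales to all nine):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // Invented helper: the constructor performs the JMS setup steps in the
    // one order that works, so client code cannot reorder or skip them.
    public class JmsSender {
        private final Connection connection;
        private final Session session;
        private final MessageProducer producer;

        public JmsSender(ConnectionFactory factory, Queue queue) throws JMSException {
            connection = factory.createConnection();                              // step 1
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);  // step 2
            producer = session.createProducer(queue);                             // step 3
            connection.start();                                                   // step 4
        }

        public void send(String text) throws JMSException {
            producer.send(session.createTextMessage(text));
        }
    }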

OSGi in Distributed Infrastructures

We're working on an OSGi-based infrastructure for processing stream-based data flows. Specific processing tasks are executed by individual OSGi components. We now need the possibility to distribute those components over different machines, which means, we need some kind of communication mechanism between OSGi components/containers.
During my research I came across different potential solutions: R-OSGi, Apache CXF for Distributed OSGi, Eclipse Communication Framework.
ECF seems particularly interesting as it supports different transport formats and provides support for things like service discovery.
My central questions:
Are there any detailed tutorials/walk-throughs for setting up an ECF infrastructure within Felix? (From my research, I found that Felix support has been added recently.)
Are there any solutions besides the three listed above which I might have missed?
Is there a reason for taking Apache CXF instead of ECF?
The first question -- whether there is a detailed walk-through for setting up ECF with Felix -- I don't know the answer to, though a search engine query combining those terms might turn something up.
The problem is ECF uses the Equinox infrastructure, and has at times inadvertently relied on packages that are non-public through transitive dependencies (particularly the Runtime API which uses Equinox for non-public debugging). This, in turn, means that ECF relies on a whole host of other components to be available and it's this set which typically isn't well defined on a Felix runtime.
You have missed out Paremus' Service Fabric, which is a commercial OSGi cloud solution. I'm not sure if you were specifically focussing on open-source or not; but if you are including commercial licenses then they have a very robust architecture for remote services.
Finally, the Apache CXF over ECF question -- if you're using Felix, I'd argue that going with Apache CXF is probably easier than going with ECF. This is mainly due to the dependency set and getting it working, combined with the fact that ECF may not be tested on Felix and so may assume particular aspects of the Equinox runtime (which includes, for example, the runtime's parent classloader delegation to pick up things on the boot classpath). This isn't really the fault of ECF per se, but rather an artefact of how the Eclipse ecosystem works.
If you want to communicate with non-OSGi runtimes, Apache CXF has the advantage that it can generate WSDL for interaction with other languages. I believe that you can do the same thing in ECF with a bit more work. The CXF solution is likely to be more verbose than a corresponding ECF one (WSDL always is), but if you're not handling high volumes of requests this isn't likely to make a significant difference.
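For what it's worth, exporting a remote service with CXF's Distributed OSGi is mostly a matter of registering it with the standard Remote Services properties; roughly like this (the Greeter interface, its implementation and the address are made up, and the exact property values should be checked against the CXF documentation):

    import java.util.Dictionary;
    import java.util.Hashtable;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class Activator implements BundleActivator {
        public void start(BundleContext context) {
            Dictionary<String, Object> props = new Hashtable<String, Object>();
            // Standard Remote Services (RFC 119) properties:
            props.put("service.exported.interfaces", "*");
            props.put("service.exported.configs", "org.apache.cxf.ws");
            props.put("org.apache.cxf.ws.address", "http://0.0.0.0:9090/greeter");
            // GreeterService/GreeterImpl are invented names for this sketch.
            context.registerService(GreeterService.class.getName(),
                                    new GreeterImpl(), props);
        }

        public void stop(BundleContext context) { }
    }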

What makes the Spring framework a lightweight container?

When people say that Spring is a lightweight container compared to other frameworks, what do they mean? That it occupies less memory in the system, or that it does not have operations like start/stop that we have for EJB containers, or that it doesn't need a special container?
What makes Spring a lightweight container?
Whether it is "lightweight" or "heavyweight", it is all about comparison. We consider Spring to be lightweight when we are comparing to normal J2EE container. It is lightweight in the sense of extra memory footprint for the facilities provided (e.g. Transaction Control, Life Cycle, Component dependency management)
However, there are sometimes other criteria to compare for the "weight" of a container, e.g. intrusiveness in design and implementation; facilities provided etc.
Ironically, Spring is sometimes treated as a heavyweight container when compared to other POJO-based containers, like Guice and Plexus.
Spring calls itself 'lightweight' because you don't need all of Spring to use part of it. For example, you can use Spring JDBC without Spring MVC.
Spring provides various modules for different purposes; you can pull in dependencies according to the module you need. That is, you don't need to download all the dependencies or JARs just to use one particular module.
If you want to run a Java EE application, you can't just create a small application that will run on its own. You will need a Java EE application server to run it, such as GlassFish, WebLogic or WebSphere. Most application servers are big, complex pieces of software that are not trivial to install or configure.
You don't need such a thing with Spring. You can use Spring dependency injection, for example, in any small, standalone program.
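A minimal sketch, using Spring's annotation-based configuration (the GreetingService bean is invented for the example):

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    class GreetingService {                 // invented bean class
        String greet() { return "hello"; }
    }

    @Configuration
    class AppConfig {
        @Bean
        GreetingService greetingService() {
            return new GreetingService();
        }
    }

    public class Main {
        public static void main(String[] args) {
            // No application server involved: the container is just a library.
            AnnotationConfigApplicationContext ctx =
                    new AnnotationConfigApplicationContext(AppConfig.class);
            System.out.println(ctx.getBean(GreetingService.class).greet());
            ctx.close();
        }
    }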
I think "lightweight" is mostly a buzz-word. It's meaning is highly subjective and based on context. It can mean "low memory footprint", it can be low execution overhead, low start-up overhead. People also use it to differentiate between some perceived level of complexity and/or learning-curve. In any case, it's assuredly relative as there is no defined point on any scale where "light" becomes "heavy" in terms of "weight".
I personally think it's a dangerous word since it has no real, quantifiable meaning. It's something people throw into architecture proposals to beef up the "pro" section of a certain framework they want to use anyway. If you see or hear it being used in any such situation, it's a perfect opportunity to ask "what does that mean?". If you get an angry or frustrated response (combined with rolling of eyes and shaking of head), it means that the person has decided on a certain architecture, but hasn't managed to formulate coherent or objective reasons for it.
EDIT: not sure I would categorize spring as a "container" either, but that's a similar apples and oranges discussion. I'd call it a framework.
Spring is lightweight because other J2EE containers, especially EJB 2.1, require more configuration; they can contain a lot of do-nothing code, they have a complex directory structure for packaging applications, and overall they take extra memory. Spring, on the other hand, minimises all of these things, so it is lightweight.
I think one can also say that Spring is lightweight because it uses POJOs (plain old Java objects). A POJO class does not have to implement or extend technology-specific APIs (interfaces, classes); it is not bound to any technology-specific API.
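For example, compare what EJB 2.1 demanded of a class with a Spring-managed POJO (both classes are invented for illustration; each would live in its own file):

    // EJB 2.1: the class is tied to the framework API.
    public class OrderBean implements javax.ejb.SessionBean {
        public void setSessionContext(javax.ejb.SessionContext ctx) { }
        public void ejbActivate() { }
        public void ejbPassivate() { }
        public void ejbRemove() { }
        public void ejbCreate() { } // required by the EJB container's conventions
        public double total(String orderId) { return 0.0; /* business logic */ }
    }

    // Spring: a plain class with no framework imports at all.
    public class OrderService {
        public double total(String orderId) { return 0.0; /* business logic */ }
    }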

When is Spring + Tomcat not powerful enough?

I've been reading/learning more about Spring lately, and how one would use Spring in combination with other open-source tools like Tomcat and Hibernate. I'm evaluating whether or not Spring MVC could be a possible replacement technology for the project I work on, which uses WebLogic and a LOT of custom-rolled Java EE code. The thing is, I've always suspected that our solution is over-engineered and WAY more complex than it needs to be. Amazingly, it's 2009, and yet, we're writing our own transaction-handling and thread-pooling classes. And it's not like we're Amazon, eBay, or Google, if you know what I mean. Thus, I'm investigating a "simpler is better" option.
So here's my question: I'd like to hear opinions on how you make the decision that a full-blown Java EE application server is necessary, or not. How do you "measure" the size/load/demand on a Java EE app? Number of concurrent users? Total daily transactions? How "heavy" does an app need to get before you throw up your hands in surrender and say, "OK, Tomcat just isn't cutting it, we need JBoss/WebLogic/WebSphere"?
I don't think that the decision to use a full-fledged Java EE server or not should be based on number of users or transactions. Rather it should be based on whether you need the functionality.
In my current project we're actually moving away from JBoss to vanilla Tomcat because we realized we weren't using any of the Java EE functionality beyond basic servlets anyway. We are, however, using Spring. Between Spring's basic object management, transaction handling and JDBC capabilities, we aren't seeing a compelling need for EJB. We currently use Struts 2 rather than Spring's MVC, but I've heard great things about that. At any rate, Spring integrates well with a number of Java web frameworks.
Spring does not attempt to replace certain advanced parts of the Java EE spec, such as JMS and JTA. Instead, it builds on those, making them consistent with the "Spring way" and generally easier to use.
If your application requires the power of the likes of JMS and JTA, you can easily use them via Spring; that's not a problem.
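For example, Spring's JmsTemplate takes care of opening and closing the JMS connection and session around each operation; a small sketch (the class and queue name are invented):

    import javax.jms.ConnectionFactory;
    import org.springframework.jms.core.JmsTemplate;

    public class OrderNotifier {
        private final JmsTemplate jms;

        public OrderNotifier(ConnectionFactory connectionFactory) {
            this.jms = new JmsTemplate(connectionFactory);
        }

        public void notifyShipped(String orderId) {
            // One call; the connection/session/producer handling happens inside.
            jms.convertAndSend("orders.shipped", orderId);
        }
    }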
Google open sources a lot of their code. If you're writing low-level things yourself, instead of implementing code that's already written, you're often overthinking the problem.
Back to the actual question: Walmart.com, etrade.com, The Weather Channel and quite a few others just use Tomcat. Marketing and sales guys from IBM would have you believe differently, perhaps, but there's no upper limit on Tomcat.
Except for EJB, I'm not sure what Tomcat is missing, and I'm not a fan of EJB.
What Tomcat does not offer, apart from the more exotic elements of Java EE, is session beans (a.k.a. EJBs). Session beans allow you to isolate your processing efficiently. So you could have one box for the front end, another for the session beans (business logic) and another for the database.
You would want to do this for at least 2 reasons:
Performance: you're finding that having one box handle everything loads it too much. Separating the different layers onto different boxes allows you to scale out. Session beans can also be load-balanced at a finer-grained level. Tomcat and other web containers of that ilk don't have this kind of clustering out of the box.
Flexibility: now that you've moved your business logic into its own environment, you could develop an alternate front end which uses the same layer but is, say, a thick client. Other contexts might also want to make use of the session beans.
Though I should probably point out that if you use web services to communicate with that middle tier, it could also be on Tomcat!
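In EJB 3 terms, putting the business logic behind a remote session bean looks roughly like this (the service and its logic are invented):

    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    @Remote
    public interface PricingService {
        double quote(String productId);
    }

    // Deployed on the middle-tier box; the front end calls it over the wire.
    @Stateless
    public class PricingServiceBean implements PricingService {
        public double quote(String productId) {
            return 42.0; // placeholder business logic
        }
    }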
The only reason to use a full-blown Java EE server is if you need distributed XA transactions; if you don't, you can use Spring + JPA + Tomcat + Bean Validation + JSTL + EL + JSP + JavaMail.
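A sketch of what the XA case looks like: with JTA you demarcate one transaction that can span several XA-capable resources (the surrounding class is invented; "java:comp/UserTransaction" is the standard JNDI name):

    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    public class TransferService {
        public void transfer() throws Exception {
            UserTransaction utx = (UserTransaction)
                    new InitialContext().lookup("java:comp/UserTransaction");
            utx.begin();
            try {
                // ... update database A, send a JMS message, update database B ...
                utx.commit();
            } catch (Exception e) {
                utx.rollback();
                throw e;
            }
        }
    }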
Also, a Java EE server is supposed to implement JMS, but it does not make sense to run the JMS broker in the same VM as the rest of the app server, so if you need JMS you should run a separate JMS server.
I strongly disagree with all answers given here.
Everything can be added to Tomcat, including EJB, CDI, JTA, Bean Validation, JAX-RS, etc.
The question is: do you want this? Do you want to assemble all those dependencies in the right versions and test that it all works together, when others have already done this?
Let's be clear: nobody uses only Tomcat! Everyone always adds a web framework, an IoC container, an ORM, a transaction manager, web services, etc.
Lightweight Java EE servers like TomEE already include all of that and make the full-stack experience of having all those things integrated so much better.
Maybe this can be of interest:
http://onjava.com/onjava/2006/02/08/j2ee-without-application-server.html
HTH

How does OSGi manage interaction of components running in separate JVMs?

I have been trying to understand a bit more about the wider picture of OSGi without reading through the entire specification. As with so many things, the introduction to what OSGi actually is was probably written by someone who had been working on it for a decade and perhaps wasn't best placed to put themselves in the mindset of someone who knows nothing about it :-)
Looking at Felix's example DictionaryService, I don't really understand what is going on. Is OSGi a distinct instance of a JVM into which you load bundles which can then find each other?
Obviously it is not just this because other answers on StackOverflow are explicit that OSGi can solve the dependency problem of a distributed system containing modules deployed within distinct JVMs (plus the FAQ keeps talking about networks).
In this latter case, how does a component running in one JVM interact with another component in a separate JVM? Can the two components "use" each other as if they were running within the same JVM (i.e. via local method calls), and how does OSGi manage the marshalling of data across a network (do you have to use Serializable for example)?
Or does the component author have to use some other distinct mechanism (either provided by OSGi or written themselves) for communication between remote components?
Any help much appreciated!
Yes, OSGi only deals with bundles and services running on the same VM. However, one should note that it is a distinct feature of OSGi that it facilitates running multiple applications (in a controlled way and sharing common modules) on the same JVM at all.
When it comes to accessing services outside the client's JVM, there is currently no standardized solution. Paremus Infiniflow and the derived open-source project Newton use an SCA approach. The upcoming 4.2 release of the OSGi specs will address one side of the problem, namely how to use generic distribution software in such a way that it can bring remote services into the client's JVM.
As somebody mentioned, R-OSGi also deals with the other side of the problem: how to manage dependencies between distributed OSGi frameworks. R-OSGi is not generic distribution software; it explicitly deals with the lifecycle issues and dependency management of OSGi bundles.
As far as I know, OSGi does not solve this problem out of the box. There are OSGi-bundles, for example Remote OSGi, which allow the programmer to distribute services across a network.
Not yet; I think it's being worked on for the next release.
But some companies have already implemented distributed OSGi. One I'm aware of is Paremus' Infiniflow (http://www.paremus.com/products/products.html). At LinkedIn they are also working on this; more info here: Building LinkedIn's next-gen architecture with OSGi, and here: Matt Raible: building LinkedIn's next-gen architecture.
Here's a summary of the changes for OSGi 4.2: Some thoughts on the OSGi R4.2 draft. There's a section on RFC 119 dealing with distributed OSGi.
AFAIK, bundles run in the same JVM but are not loaded using the same class loader (that's why you can use two different versions of the same bundle at the same time).
To interact with components in another JVM, you must use a network protocol such as RMI.
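A bare-bones sketch of such a cross-JVM call with plain RMI (all names are invented):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    interface Echo extends Remote {
        String echo(String msg) throws RemoteException;
    }

    public class EchoServer implements Echo {
        public String echo(String msg) { return "echo: " + msg; }

        public static void main(String[] args) throws Exception {
            // Server JVM: export the object and advertise it in an RMI registry.
            Echo stub = (Echo) UnicastRemoteObject.exportObject(new EchoServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("echo", stub);
        }
    }

    // Client JVM:
    //   Echo remote = (Echo) LocateRegistry.getRegistry("server-host", 1099).lookup("echo");
    //   remote.echo("hello");  // a network call that merely looks like a local one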
The OSGi alliance is working on a standard for distributed OSGi:
http://www.osgi.org/download/osgi-4.2-early-draft2.pdf
There even is an early Apache implementation of this new standard:
http://cxf.apache.org/distributed-osgi.html
@Patriarch24
The accepted answer to this question would seem to indicate otherwise (unless I'm misreading it). Also, taken from the FAQ:
The OSGi Service Platform provides the functions to change the composition dynamically on the device of a variety of networks, without requiring a restart
(Emphasis my own). Although in the same FAQ it describes OSGi as in-VM.
Why am I so confused about this? Why is such a basic question about a decade-old technology not clear?
The original problem OSGi addressed was more related to distribution of code (and then configuration of bundles) than to distribution of execution.
People looking at distributed components tend to look towards SCA instead.
The "introduction" link is not really an intro, it is a FAQ entry. For more information, see http://www.osgi.org/About/WhatIsOSGi Not hard to find I would think.
Anyway, OSGi is an in-VM SOA. That is, the OSGi Framework is about what happens inside the VM: it provides a framework for structuring your application inside the VM so you can build it to a large extent from components. So the core has nothing to do with distribution; it is completely oblivious of who implements the services, it just provides a mechanism for modules to meet each other in a loosely coupled way.
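In code, that loose coupling is simply the service registry; this is roughly what the Felix DictionaryService example boils down to (DictionaryImpl is an invented name, and each activator lives in its own bundle):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;

    // Provider bundle: publishes an implementation under the interface name.
    public class ProviderActivator implements BundleActivator {
        public void start(BundleContext context) {
            context.registerService(DictionaryService.class.getName(),
                                    new DictionaryImpl(), null);
        }
        public void stop(BundleContext context) { }
    }

    // Consumer bundle: looks the service up without knowing who implements it.
    public class ConsumerActivator implements BundleActivator {
        public void start(BundleContext context) {
            ServiceReference ref =
                    context.getServiceReference(DictionaryService.class.getName());
            if (ref != null) {
                DictionaryService dictionary =
                        (DictionaryService) context.getService(ref);
                System.out.println(dictionary.checkWord("osgi"));
            }
        }
        public void stop(BundleContext context) { }
    }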
That said, the µService model reifies the joints between the modules, and it turns out that you can build support on top of the framework that provides distribution to the other components. In the latest releases we specified some mechanisms that standardize this in the core and provide a special service, Remote Service Admin, that can manage a distributed topology.
If you are looking for a distributed, OSGi-centric cloud runtime, then the Paremus Service Fabric (https://docs.paremus.com/display/SF16/Introduction) provides these capabilities.
One or more systems, each consisting of a number of OSGi assemblies (Blueprint or Declarative Services), can be dynamically deployed and maintained across a population of OSGi runtime frameworks (Knopflerfish, Felix or Equinox).
A lightweight RSA (Remote Service Admin) framework is provided, which provides service discovery by default using DDS (a seriously good middleware messaging technology), though ZooKeeper and other approaches can be used. Currently supported remoting protocols include RMI and Avro.
Regards
Richard
