Can an OSGi library include Forms and Views, or is it restricted to just the XPages elements?
An XSP Library (one type of OSGi plugin that is directly applicable to XPages -- DOTS is another instance of this type) can contribute any artifact type that is defined in the XPages configuration file format (a.k.a. "xsp-config"). This wiki article is a good overview of creating these kinds of artifacts. I also recommend looking at the source code of the XSP Starter Kit project on OpenNTF, as it contains reference implementations of many different types of XSP artifacts, including several that aren't covered in the wiki article mentioned above.
Since the XPages architecture was largely inspired by JSF, the vast majority of the types of artifacts you can distribute in this manner are not inherently associated with Domino -- rather, you're defining concrete implementations of the same concepts used by developers working with other JSF implementations (e.g. JBoss RichFaces, Apache MyFaces). As such, an XSP Library is not designed for distribution of design elements traditionally associated with the Lotus Notes client, such as Forms / Subforms / Views, etc. (traditional design elements that should behave consistently across multiple applications should continue to be distributed using Domino's design element inheritance features).
Well-designed XSP Library artifacts are therefore loosely coupled in this regard. Consider the data sources that ship with the platform: they accept properties like formName or viewName to define each instance's relationship to the back-end data model, but make no assumptions about the contents or design of those elements. In the same way, each custom XSP Library artifact should perform a specific function independently of anything else the library (or application) might contain, and should support a set of properties sufficient to instruct it how to perform that function differently than another instance of the same artifact might.
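As a rough sketch of how such a property-driven artifact might look in Java (the class and property names here are hypothetical, and the property would additionally be declared in the library's xsp-config file so Designer can surface it):

    import javax.faces.component.UIComponentBase;

    // Hypothetical XSP component: it is told which view to read via a
    // property, but makes no assumptions about that view's design.
    public class ViewSummaryComponent extends UIComponentBase {

        private String viewName;

        @Override
        public String getFamily() {
            return "example.ViewSummary";
        }

        public String getViewName() {
            return viewName;
        }

        public void setViewName(String viewName) {
            this.viewName = viewName;
        }
    }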
I want to implement a Spring Data repository for a database which is not currently supported (hypothetical question - no need to ask about the database).
Is this possible, and where can I find an example of how to do it?
The short answer is "yes, definitely". One of Spring Data's main intentions is to unify access to different data storage technologies under the same API style, so you can implement a Spring Data adapter for any database, as long as it is worth implementing a Java connector to that database (which is certainly possible for the majority of databases).
The long answer would take several blog posts or even a small book :-) But let me just highlight a couple of points. Each of the existing Spring Data modules exposes one (or both) of the following API flavors:
imperative - in the form of various template classes (e.g. RedisTemplate). This is mostly for databases that don't have a query language, only a programmatic API, so you simply wrap your database's API in a template class and you're done.
declarative - in the form of so-called declarative repositories: a fairly sophisticated mechanism that matches annotations on method signatures, or the method signatures themselves, to the database's native queries (a minimal sketch follows this list). Luckily, the spring-data-commons module provides a lot of scaffolding and common infrastructure code for this, so you just need to fill in the gaps for your specific data storage mechanism. You can look at the slide deck from my conference talk, where I explain at a high level how a particular Spring Data module generates real repository implementations from user declarations, or you can just go into any of the existing modules and read the source code. The most interesting parts there are usually the RepositoryFactory and QueryLookupStrategy implementations.
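To make the declarative flavor concrete, here is what a user-facing repository declaration typically looks like (Person and findByLastname are illustrative; your module's job is to generate the backing implementation at runtime):

    import java.util.List;
    import org.springframework.data.repository.CrudRepository;

    // The user only declares the interface; the Spring Data module
    // inspects the method signatures and derives native queries from them.
    public interface PersonRepository extends CrudRepository<Person, Long> {

        // Parsed as "find all Person entities whose lastname property matches".
        List<Person> findByLastname(String lastname);
    }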
That is an extremely simplified view of the Spring Data concepts. To get more detailed information and explanations of the core principles, I'd suggest reading the spring-data-commons reference documentation and having a look at the spring-data-keyvalue project, which is a good starting point for implementing a Spring Data module for key-value stores.
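For instance, spring-data-keyvalue can back repositories with a plain in-memory map, which makes it easy to watch the whole mechanism work end to end. A minimal sketch (the entity and keyspace name are made up):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.annotation.Id;
    import org.springframework.data.keyvalue.annotation.KeySpace;
    import org.springframework.data.map.repository.config.EnableMapRepositories;

    // Instances are stored in an in-memory map under the "people" keyspace.
    @KeySpace("people")
    class Person {
        @Id Long id;
        String lastname;
    }

    // Enables map-backed implementations for repository interfaces in this
    // package, reusing the scaffolding provided by spring-data-commons.
    @Configuration
    @EnableMapRepositories
    class KeyValueConfig {
    }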
I've been an AEM developer for almost a year now. I know AEM uses the Declarative Services component framework to manage the lifecycle of OSGi components.
Consider a scenario where I export a package from one bundle and import that package from another: I could then create objects of the first bundle's classes inside the second bundle as well. It's an import-export contract in this case.
My question is: when should I use the component framework to manage the lifecycle of my objects, and when should I handle it myself by creating them when required?
In an ideal design, you would NOT in fact be able to create objects from the exported package, because that package would contain only interfaces. This makes it a "pure" contract (API) export. If there are classes in there that you can directly instantiate, then they are implementation classes.
In general it is far better to export only pure APIs and to keep implementation classes hidden. There are two main reasons:
Implementation classes tend to have downstream dependencies. If you depend directly from implementation class to implementation class then you get a very large and fragile dependency graph... and eventually that graph will contain a cycle. In fact it's almost inevitable that it will. At that point, your application is not modular because you cannot deploy or change any part of it independently.
Pure interfaces can be analysed for compatibility between versions. As a consumer or a provider of an API, you know exactly which versions of the API you can support because the API does not contain executable code. However if you have a dependency onto an implementation class, then you never really know when they break compatibility because the breakage could happen deep down in executable code that you can't easily analyse.
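As a hypothetical illustration, a pure API export contains only the contract, while the implementing class stays in a private package:

    // This file lives in the exported API package
    // (manifest: Export-Package: com.example.greeter.api).
    package com.example.greeter.api;

    // No executable code, so compatibility between versions can be analysed.
    public interface Greeter {
        String greet(String name);
    }

    // The implementation (e.g. com.example.greeter.impl.GreeterImpl) lives
    // in a package that is NOT exported, so consumers can only bind to the
    // interface via the service registry and never instantiate the class.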
If your objects are services then there's no question, they have to be OSGi components.
For other things, my first choice is OSGi components, unless they're trivial objects like data holders or something similar.
If an object requires configuration or refers to OSGi services then it's also clearly an OSGi component.
In general, it's best IMO to think in services and define your package exports as the minimum that allows other bundles to use a bundle's services. Unless a bundle is clearly a reusable library like commons-io (to take a simple example).
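For the service case, a minimal Declarative Services component might look like this (GreetingService and LoggerService are hypothetical; the annotations are the standard ones from org.osgi.service.component.annotations):

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // The framework creates, wires and destroys this object;
    // you never call "new" on it yourself.
    @Component(service = GreetingService.class)
    public class GreetingServiceImpl implements GreetingService {

        // DS injects another service whose lifecycle is likewise
        // managed by the framework.
        @Reference
        private LoggerService logger;

        @Activate
        void activate() {
            logger.info("GreetingService activated");
        }

        @Override
        public String greet(String name) {
            return "Hello, " + name;
        }
    }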
Just a general question on any techniques used to separate your web application for customer-specific requirements. At the moment I have one web application, but I need to add new functionality for one customer that's not needed by another. I know Spring 3 comes with new support for profiles, but I'm just curious if anyone has had a similar problem and how they went about solving it, particularly using Spring MVC and Maven as a build management tool.
The proper way to do this would be as follows:
Have a web assembly module. This module builds a war file containing the proper features, which are extracted into separate modules and simply declared as dependencies. My advice is to have a separate web assembly project per client. This way you will keep things neat for yourself, avoid mix-ups (such as releasing features to clients who haven't paid for them) and have easier maintenance overall.
Furthermore, decide whether to do your version separation at the level of the version tag or the classifier:
The version tag can be used to separate things maintained in different branches.
The classifier tag can be used to separate configurations specific to your clients, as sketched below.
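For example, a per-client assembly might pull in a client-specific configuration artifact via a classifier (the coordinates here are hypothetical):

    <!-- in the clientA web assembly module's pom.xml -->
    <dependency>
      <groupId>com.example.app</groupId>
      <artifactId>app-config</artifactId>
      <version>1.4.0</version>
      <!-- same artifact, built and attached separately per client -->
      <classifier>clientA</classifier>
    </dependency>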
My project is organized as follows:
ASP.NET MVC 3 WebApp
Domain.Core - Interfaces [library]
Domain.Core.Entity - Entities [library]
Infrastructure.Core - Implementation of Interfaces [library]
Infrastructure.IoC - Uses Unity as a means of achieving Inversion of Control [library]
The predicament is as follows:
If I add a generic method to an interface in Domain.Core, such as the following, I get a compile error asking me to add a reference to Domain.Core.Entity in the Infrastructure.IoC project.
T Get<T>(int Id) where T : EntityBase, new();
T can be EntityBase itself, or Blog (which inherits from EntityBase), or one of a few other entities that all inherit from EntityBase. The idea is to return an entity based on the child class provided, so that the child class is loaded with the default data required for all classes that implement EntityBase.
So, the question is two fold:
Is it better to not reference the Domain.Core.Entity project in the IoC project and keep things clean?
How would I achieve something like the above without having to muddy the cleanliness of references?
Thank you.
Note: I went through a few of the questions here to search for this topic but didn't see any. If you do see it, then let me know and I will delete this question.
With Dependency Injection, it's best to have a single component with the responsibility of composing all the various collaborators. This is the third party in Nat Pryce's concept of Third-Party Connect, or what I call the Composition Root. In the architecture outlined above, this responsibility seems to fall on Infrastructure.IoC, so given this structure, there's nothing wrong with adding a reference from Infrastructure.IoC to Domain.Core.Entity - in fact, I would find it more surprising if it were not so.
However, as an overall bit of feedback, I think it's worth considering what benefit is actually derived from having a separate Infrastructure.IoC library. The entry point of the application (the ASP.NET MVC project) will need a reference to Infrastructure.IoC, which in turn must reference all other libraries in order to compose them. Thus, the ASP.NET MVC application ends up with an indirect reference to all other libraries, and you might as well merge the two.
(Technically, you can decouple the various libraries by relying on some sort of late binding mechanism such as XML configuration or Convention over Configuration, but the conceptual responsibility of Infrastructure.IoC is going to remain the same.)
For more information, you may want to read this answer: Ioc/DI - Why do I have to reference all layers/assemblies in entry application?
Why split the Domain.Core from Domain.Core.Entity? And why split Infrastructure.Core from Infrastructure.IoC?
I think you can get away with a 3 project structure:
Domain - has no dependencies on the other 2 projects. May contain entities and interfaces.
Infrastructure - contains interface implementations and the Unity container. Depends only on Domain project.
MVC - depends on both Domain and Infrastructure.
If you are worried about programming against concrete classes instead of interfaces, give the classes in the Infrastructure project a different namespace. You should then only have to use that namespace maybe a couple of times (in Global.asax or bootstrapper) if at all.
This way it is very clear which projects depend on which. Since you are using Unity, this is a form of onion architecture. If you find later that you should split Infrastructure or Domain into more than one project, you can refactor.
IMO, dependencies only get muddy or unclean when you have references like System.Web.Mvc or Microsoft.Practices.Unity in your domain project(s). Like Mark says in his answer, the outer layers of your "onion" will have dependencies on the inner layers, there's not much you can do to avoid that. But try to make the domain concentrate on its core business, avoiding as much detail of how it will be used in a UI as possible.
I have been trying to understand a bit more about the wider picture of OSGi without reading through the entire specification. As with so many things, the introduction to what OSGi actually is was probably written by someone who had been working on it for a decade and perhaps wasn't best placed to put themselves in the mindset of someone who knows nothing about it :-)
Looking at Felix's example DictionaryService, I don't really understand what is going on. Is OSGi a distinct instance of a JVM into which you load bundles which can then find each other?
Obviously it is not just this because other answers on StackOverflow are explicit that OSGi can solve the dependency problem of a distributed system containing modules deployed within distinct JVMs (plus the FAQ keeps talking about networks).
In this latter case, how does a component running in one JVM interact with another component in a separate JVM? Can the two components "use" each other as if they were running within the same JVM (i.e. via local method calls), and how does OSGi manage the marshalling of data across a network (do you have to use Serializable for example)?
Or does the component author have to use some other distinct mechanism (either provided by OSGi or written themselves) for communication between remote components?
Any help much appreciated!
Yes, OSGi only deals with bundles and services running on the same VM. However, one should note that it is a distinct feature of OSGi that it facilitates running multiple applications (in a controlled way and sharing common modules) on the same JVM at all.
When it comes to accessing services outside the client's JVM, there is currently no standardized solution. Paremus Infiniflow and the derived open-source project Newton use an SCA approach. The upcoming 4.2 release of the OSGi specs will address one side of the problem, namely how to use generic distribution software in such a way that it can bring remote services into the client's JVM.
As somebody mentioned, R-OSGi also deals with the other side of the problem: how to manage dependencies between distributed OSGi frameworks. R-OSGi is not generic distribution software; it explicitly deals with the lifecycle issues and dependency management of OSGi bundles.
As far as I know, OSGi does not solve this problem out of the box. There are OSGi bundles, for example Remote OSGi (R-OSGi), which allow the programmer to distribute services across a network.
Not yet; I think it's being worked on for the next release.
But some companies have already implemented distributed OSGi. One I'm aware of is Paremus' Infiniflow (http://www.paremus.com/products/products.html). At LinkedIn they are also working on this. More info here: Building LinkedIn next gen architecture with OSGi, and here: Matt Raible: building LinkedIn next gen architecture.
Here's a summary of the changes for OSGi 4.2: Some thoughts on the OSGi R4.2 draft. There's a section on RFC-119 dealing with distributed OSGi.
AFAIK, bundles run in the same JVM but are not loaded by the same class loader (that's why you can use two different versions of the same bundle at the same time).
To interact with components in another JVM, you must use a network protocol such as RMI.
The OSGi alliance is working on a standard for distributed OSGi:
http://www.osgi.org/download/osgi-4.2-early-draft2.pdf
There is even an early Apache implementation of this new standard:
http://cxf.apache.org/distributed-osgi.html
@Patriarch24
The accepted answer to this question would seem to indicate otherwise (unless I'm misreading it). Also, taken from the FAQ:
The OSGi Service Platform provides the functions to change the composition dynamically on the device of a variety of networks, without requiring a restart
(Emphasis my own). Although in the same FAQ it describes OSGi as in-VM.
Why am I so confused about this? Why is such a basic question about a decade-old technology not clear?
The original problem OSGi addressed was more related to distribution of code (and then configuration of bundles) than to distribution of execution.
People looking for distributed components are instead looking towards SCA.
The "introduction" link is not really an intro, it is a FAQ entry. For more information, see http://www.osgi.org/About/WhatIsOSGi Not hard to find I would think.
Anyway, OSGi is an in-VM SOA. That is, the OSGi Framework is about what happens inside the VM: it provides a framework for structuring your application inside the VM so you can build it, to a large extent, from components. So the core has nothing to do with distribution; it is completely oblivious of who implements the services, it just provides a mechanism for modules to meet each other in a loosely coupled way.
That said, the µService model reifies the joints between the modules, and it turns out that you can build support on top of the framework that provides distribution to the other components. In the latest releases we specified some mechanisms that standardize this in the core, and we provide a special service, Remote Service Admin, that can manage a distributed topology.
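To give a feel for how this surfaces in code: a bundle can mark a service for export with the standard Remote Services property shown below, and an installed distribution provider then does the wiring (GreetingService and its implementation are hypothetical):

    import java.util.Dictionary;
    import java.util.Hashtable;
    import org.osgi.framework.BundleContext;

    public class Activator {
        // Registers a service and, via a standard Remote Services property,
        // signals that a distribution provider may export it remotely.
        void register(BundleContext context) {
            Dictionary<String, Object> props = new Hashtable<>();
            props.put("service.exported.interfaces", "*");
            context.registerService(GreetingService.class, new GreetingServiceImpl(), props);
        }
    }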
If you are looking for a distributed, OSGi-centric cloud runtime, then the Paremus Service Fabric (https://docs.paremus.com/display/SF16/Introduction) provides these capabilities.
One or more systems, each consisting of a number of OSGi assemblies (Blueprint or Declarative Services), can be dynamically deployed and maintained across a population of OSGi runtime frameworks (Knopflerfish, Felix or Equinox).
A lightweight RSA remoting framework is provided, with service discovery by default via DDS (a seriously good middleware messaging technology), though ZooKeeper and other approaches can be used. Currently supported remoting protocols include RMI and Avro.
Regards
Richard