How does OSGi approach shared objects between bundles?

Suppose there are two exported versions of an object, where both have a property x but the new one introduces an additional property y.
How can I create a bundle that accepts both versions of the object? Let's assume it will not clone objects, compare them, put them into collections, etc. Its interaction with the object could be as simple as testing whether x != null.
Can serialization be avoided?

OSGi class-loading rules are only active at class-loading time. If your bundle, for example, publishes a service that takes an Object as a parameter, you can hand it any instance, even one that comes from a package the bundle does not import.
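To make that concrete, a minimal sketch (the service and package names are invented):

// Hypothetical service contract in the consumer bundle. Because the
// signature only mentions java.lang.Object, the consumer needs no
// Import-Package entry for whatever types callers actually pass in.
package com.example.consumer;

public interface ObjectSink {
    void accept(Object value);
}

The class-loading rules only come into play if the consumer later tries to load the argument's class by name, or to cast it to a type it would have to import.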

Christian is correct. To add to that, this is exactly why you should not share your objects directly, but share interfaces instead. While that still won't make both versions of the interface available to a single consumer, the framework will at least try to do the right thing and wire the consumer to an interface version that both sides are compatible with. In such cases, it has to pick the lowest common denominator.
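A sketch of what that looks like (the package names, version numbers, and the Thing interface are all invented):

// File 1: API bundle, package com.example.api, exported as version 1.0.0.
// A 1.1.0 export of the same package would add "String getY();".
package com.example.api;

public interface Thing {
    String getX();
}

// File 2: consumer bundle, compiled against the 1.0 contract. Its
// manifest imports the package with a range, e.g.
//   Import-Package: com.example.api;version="[1.0,2.0)"
// so the framework may wire it to either exported version.
package com.example.consumer;

import com.example.api.Thing;

public class Checker {
    public boolean hasX(Thing t) {
        return t.getX() != null; // the lowest-common-denominator check
    }
}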

Related

Changing hyperledger-composer resource definition

As a project matures, it will almost certainly become necessary to modify attributes of the resource definitions to cope with additional requirements.
Let's use two trivial examples: adding a country code to a client address, or removing a middle initial and swapping in a middle name field instead.
Currently, if the resource definition changes, Composer won't read whatever values are already extant in the repository. I didn't exhaustively try all combinations, but I have had to reconstitute my blockchain at least twice because of this problem.
Is there a way I overlooked to mark fields as either "new" or "deprecated" to get past this? It will be hard to make a case for moving a system that can't be changed into production.
In the same vein, it doesn't seem to like empty or null strings much (at least for participant attributes). Having an "optional" override somewhere would save a lot of extra bounds checking in my application. Is there one of those I missed too?
So you can use the APIs or REST to expose the legacy data? You may be referring to Playground above (it's not really a tool for looking at production data; it's for model prototyping/sandbox/testing type work).
On the optional question: you can just declare the field as optional in the model; example here -> https://github.com/hyperledger/composer-sample-networks/blob/master/packages/pii-network/models/pii.cto#L20
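For instance, a hypothetical participant definition in a .cto model (the type and field names are invented; the optional keyword is the relevant part):

participant Client identified by clientId {
  o String clientId
  o String middleName optional
  o String countryCode optional
}

Fields marked optional may be left unset, which also removes the need for the empty-string workarounds mentioned in the question.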

Why doesn't Java 8 allow multiple class inheritance using the same solution it applies to conflicting interface default methods?

Problem:
We know that Java doesn't allow extending multiple classes because it would result in the Diamond Problem, where the compiler couldn't decide which superclass method to use. With interface default methods, the Diamond Problem was reintroduced in Java 8: if a class implements two interfaces, each defining the same default method, and the implementing class doesn't override the common default method, the compiler cannot decide which implementation to choose.
Solution:
Java 8 requires a class to provide an implementation for a default method it inherits from more than one interface. So if a class implemented both interfaces mentioned above, it would have to provide its own implementation of the common default method; otherwise the compiler reports a compile-time error.
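For concreteness, a minimal sketch of that rule (interface and method names invented):

interface Left {
    default String greet() { return "Hello from Left"; }
}

interface Right {
    default String greet() { return "Hello from Right"; }
}

// Without this override, the compiler rejects the class because it
// inherits unrelated defaults for greet() from Left and Right.
class Both implements Left, Right {
    @Override
    public String greet() {
        return Left.super.greet(); // explicitly pick (or combine) one
    }
}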
Question:
Why is this solution not applicable to multiple class inheritance, i.e., having the child class override the common methods it inherits?
You didn't understand the Diamond Problem correctly (and granted, the current state of the Wikipedia article doesn't explain it sufficiently). As the classic diamond diagram illustrates,
the diamond problem occurs when the same class is inherited multiple times through different inheritance paths. This isn't a problem for interfaces (and never was), as they only define a contract, and specifying the same contract multiple times makes no difference.
The main problem is associated not with the methods but with the data of that supertype. Should the instance state of A exist once or twice in that case? If once, B and C can have different, conflicting constraints on A's instance state. Both classes might also assume they have full control over A's state, i.e., neither considers that another class has the same access level. If there are two different A states, a widening conversion of a D reference to an A reference becomes ambiguous, as either A could be meant.
Interfaces don't have these problems, as they do not carry instance data at all. They also have (almost) no accessibility issues, as their methods are always public. Allowing default methods doesn't change this, as default methods still don't access instance variables but operate through the interface's methods only.
Of course, there is the possibility that B and C declare default methods with identical signatures, causing an ambiguity that has to be resolved in D. But this is the case even when there is no A, i.e., no "diamond" at all. So this scenario is not a correct example of the Diamond Problem.
Methods introduced by interfaces may always be overridden, while methods introduced by classes can be final. This is one reason why you couldn't simply apply the same strategy to classes that works for interfaces.
The conflict described as the "diamond problem" is best illustrated by a polymorphic call to method A.m() where the runtime type of the receiver is D: imagine D inherits two different methods that both claim to play the role of A.m() (one of them could be the original method A.m(); at least one of them is an override). Now dynamic dispatch cannot decide which of the conflicting methods to invoke.
Aside: the distinction between the "diamond problem" and regular name clashes is particularly relevant in languages like Eiffel, where the conflict can be resolved locally from the perspective of type D, e.g., by renaming one method. This avoids the name clash for invocations with static type D, but not for invocations with static type A.
Now, with default methods in Java 8, the JLS was amended with rules that detect any such conflicts, requiring D to resolve the conflict (many different cases exist, depending on whether or not some of the types involved are classes). That is, the diamond problem is not "solved" in Java 8; it is just avoided by rejecting any program that would produce it.
In theory, similar rules could have been defined in Java 1 to admit multiple inheritance for classes. It's simply a decision, made early on, that the designers of Java did not want to support multiple inheritance.
The choice to admit multiple (implementation) inheritance for default methods but not for class methods is a purely pragmatic choice, not necessitated by any theory.

Are multiple controllers appropriate for one entity in the Spring Framework?

I'm starting to develop a website that uses the Spring Framework. I have three controllers: newCustomerController, editCustomerController, and deleteCustomerController. These controllers are mapped to the views used for create, update, and delete, but they all operate on just one entity: Customer.
So I would like to know: is it appropriate to declare the controllers like this?
Thanks
The answer to this question is subjective and maybe more a topic for https://softwareengineering.stackexchange.com/. However, there is something very Spring-related about it that I would like to comment on.
There are a few principles that attempt to guide developers in striking a good balance when designing classes. One of these is the Single Responsibility Principle.
In object-oriented programming, the single responsibility principle
states that every class should have a single responsibility, and that
responsibility should be entirely encapsulated by the class. All its
services should be narrowly aligned with that responsibility
A catchier explanation is
A class or module should have one, and only one, reason to change.
However, it's still often hard to reason about properly.
Nevertheless, Spring gives you a means to do so (think of this statement as poetic freedom of interpretation). Embrace constructor-based dependency injection. There are quite a few reasons why you should consider constructor-based dependency injection, but the part relevant to your question is addressed in this quote from the blog:
An often faced argument I get is: “Constructors just get too verbose
if I have 6 or 7 dependencies. With fields only, this is fine”.
Awesome, you’ve effectively worked around a clear indicator that the
code you write is doing way too much. An increase in the number of
dependencies a type has should hurt, as it makes you think about
whether you should split up the component into multiple ones.
In other words, if you stick to constructor-based injection and your constructor turns a bit ugly, the class is most likely doing too much and you should consider redesigning it.
The same works the other way around: if your operations are part of a logical whole (like CRUD operations) and they use the same dependencies (now "measurable" by the count and type of the injected dependencies), with no clear idea of what could cause the operations to evolve independently of each other, then there is no reason to split them into separate classes/components.
It would be better to define one controller for the Customer class, and in that class have all the methods related to customer operations (create, read, update, and delete).
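A sketch of that consolidated controller (Spring MVC; the CustomerService, Customer type, view names, and mappings are all invented):

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.*;

@Controller
@RequestMapping("/customers")
public class CustomerController {

    private final CustomerService customerService; // hypothetical service

    // Constructor injection keeps the dependency list visible,
    // per the advice in the previous answer.
    public CustomerController(CustomerService customerService) {
        this.customerService = customerService;
    }

    @GetMapping("/{id}")
    public String read(@PathVariable long id, Model model) {
        model.addAttribute("customer", customerService.find(id));
        return "customer/view";
    }

    @PostMapping
    public String create(@ModelAttribute Customer customer) {
        customerService.create(customer);
        return "redirect:/customers";
    }

    @PostMapping("/{id}/edit")
    public String edit(@PathVariable long id, @ModelAttribute Customer customer) {
        customerService.update(id, customer);
        return "redirect:/customers";
    }

    @PostMapping("/{id}/delete")
    public String delete(@PathVariable long id) {
        customerService.delete(id);
        return "redirect:/customers";
    }
}

If these operations ever stop sharing dependencies or start evolving independently of each other, that is the cue to split them back into separate controllers.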

Prism, mapping region to a view

I'm quite new to Prism. I'm studying the QuickStarts shipped with it, as well as other examples on the net. Almost all of them make modules aware of which region their view(s) get dropped into. Typically, the Initialize method of a module has a line like the following.
RegionManager.Regions["LeftRegion"].Add(fundView);
I feel quite uncomfortable with that. There's a similar discussion, but I think it should be the responsibility of the shell component to define such a mapping. However, I cannot find any example of this approach, and I'm not sure whether the bootstrapper is the right place to put the mapping.
Is this approach completely wrong?
Nothing is completely wrong. But it makes no sense to have the shell/bootstrapper (which by design doesn't know anything about the application it will host) know what view goes into which region.
Consider an application that can be extended by simply adding modules into a given folder. When you follow the approach that the module knows where its views want to reside (the mapping is done in Initialize()), this is no problem. I designed my first Prism application that way.
But if the mapping is done in your shell, you always have to update the shell (which is part of the base application, not any module) whenever you want to add another module. This runs contrary to the loose-coupling paradigm. Besides that, you would have to create one base application for every module constellation, and there are 2^(number of modules) combinations to cover. That results in losing the flexibility you gained by using Prism.
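For comparison, a sketch of the module-owned mapping done with view discovery instead of Regions[...].Add (Prism 4-era API; the module and view types are invented). The shell only declares "LeftRegion" in its XAML and never learns which views land there:

using Microsoft.Practices.Prism.Modularity;
using Microsoft.Practices.Prism.Regions;

public class FundModule : IModule
{
    private readonly IRegionManager regionManager;

    public FundModule(IRegionManager regionManager)
    {
        this.regionManager = regionManager;
    }

    public void Initialize()
    {
        // View discovery: the region manager instantiates FundView
        // whenever "LeftRegion" is shown; the shell stays ignorant
        // of this module's existence.
        regionManager.RegisterViewWithRegion("LeftRegion", typeof(FundView));
    }
}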

aws-sdk-ruby AWS::Record::Base records sharing the same domain

We are using the aws-sdk for ruby, specifically AWS::Record::Base.
For various reasons we need to put records of various objects within the same domain in SDB.
The approach we thought we'd use here would be to add an attribute to each object that contains the object's class name, and then include that attribute in the where clause of finder methods when obtaining objects from SDB.
My questions to readers are:
What are your thoughts on this approach?
How would this best be implemented tidily? What is the best way to add a default attribute to an object without defining it explicitly in each model? Is overriding find or where in the finder methods sufficient to ensure that queries against SDB include clauses for the new default attribute?
Thoughts appreciated.
It really depends on your problem, but I find it slightly distasteful. Variant records are fine and dandy, but when you start out with apples and dinosaurs and they have no common attributes, this approach has no benefit that I know of [aside from conserving your (seemingly pointless) quota of 250 SimpleDB domains]. If your records have something in common, then I can see where this approach might be useful, but someone scarred like me by legacy systems with variant records in Btrieve (achieved through C unions) has a hardwired antipathy toward this approach.
The cleanest approach I can think of is to have your models share a common parent through inheritance. The parent could then know of the child types, and implement the query appropriately. However, this design is definitely not SOLID and violates the Law of Demeter.
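A hedged sketch of that parent-class idea (aws-sdk v1 style; the discriminator attribute, the typed scope, and the Customer model are all invented for illustration):

class SharedDomainRecord < AWS::Record::Base
  set_domain_name 'app_records'   # every model stores into this one domain
  string_attr :record_type        # discriminator holding the class name

  # Scope finders to the rows that belong to the calling model class.
  def self.typed
    where(:record_type => name)
  end
end

class Customer < SharedDomainRecord
  string_attr :email
end

customer = Customer.new(:email => 'someone@example.com')
customer.record_type = Customer.name
customer.save

Customer.typed.each { |c| puts c.email }

Stamping record_type automatically (e.g., in an overridden save) would be tidier than setting it per instance; relying on overriding where alone is risky, because any finder that doesn't go through your override will silently return other classes' rows.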
