Java 8 vs Java 7 Collection Interface: self-referential instance - java-8

I was reading the Java 8 Collection interface documentation and noticed that Java 8 added the following paragraph to the description, which is not present in the Java 7 Collection interface:
Some collection operations which perform recursive traversal of the
collection may fail with an exception for self-referential instances
where the collection directly or indirectly contains itself. This
includes the clone(), equals(), hashCode() and toString() methods.
Implementations may optionally handle the self-referential scenario,
however most current implementations do not do so.
I am a little confused as to why this paragraph was included. Is it because Java 7 cannot have self-referential instances, where a collection directly or indirectly contains itself? Did Java 8 introduce a new interface or some new feature that allows that?
I am looking for a detailed explanation, and it would be great if you could include an example to illustrate your point.

Is it because Java 7 cannot have self-referential instances where
collection directly or indirectly contains itself?
Then Java 8 introduced new interface or some new feature that allows
that?
I don't think so.
Before Java 8, instances could of course hold self references.
Using them in a Collection could, of course, create infinite loops and fail at runtime with a StackOverflowError.
Here are two classes whose instance fields have a circular dependency between them, and whose toString() methods rely on those fields.
Parent, which refers to Child:
public class Parent {
    private List<Child> childs;
    public Parent(List<Child> childs) {
        this.childs = childs;
    }
    @Override
    public String toString() {
        return "Parent [childs=" + childs + "]";
    }
}
Child, which refers to Parent:
public class Child {
    private Parent parent;
    public Parent getParent() {
        return parent;
    }
    public void setParent(Parent parent) {
        this.parent = parent;
    }
    @Override
    public String toString() {
        return "Child [parent=" + parent + "]";
    }
}
Suppose now you create a Child and an associated Parent:
List<Child> childs = new ArrayList<>();
Child child = new Child();
childs.add(child);
Parent parent = new Parent(childs);
child.setParent(parent);
Now you can invoke:
parent.toString();
child.toString();
or invoke it on a Collection instance such as:
childs.toString();
In every case you will get exactly the same result: a java.lang.StackOverflowError,
as the child invokes the parent, which invokes the child, which invokes the parent, and so on...
The doc was very probably updated in Java 8 to emphasize the risk of implementing these methods in a brittle way, since the Collection implementations generally don't address it. That makes sense: hiding the malfunctioning of buggy client code should be avoided, otherwise the problem will never be solved.
"Implementations may optionally handle the self-referential scenario,
however most current implementations do not do so."

The toString() format for List is specified; it must recursively toString() the contents and render them in a comma-separated list delimited by [ ... ]. If you create a List as follows:
List<Object> list = new ArrayList<>();
list.add(list);
System.out.println(list);
a naive implementation of toString() would throw StackOverflowError (try it.) The Collection implementations try to defend against this problem for some core methods, in some cases; that's what this paragraph is saying.
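To see that partial defense in action, here is a small sketch (behavior as observed with the reference ArrayList implementation): toString() detects a direct self-reference and prints a placeholder, while hashCode() has no such check and overflows the stack.

```java
import java.util.ArrayList;
import java.util.List;

public class SelfRefDemo {
    public static void main(String[] args) {
        List<Object> list = new ArrayList<>();
        list.add(list); // the list now contains itself

        // AbstractCollection.toString() checks for "e == this" and
        // prints a placeholder instead of recursing:
        System.out.println(list); // [(this Collection)]

        // AbstractList.hashCode() has no such check and recurses forever:
        try {
            list.hashCode();
        } catch (StackOverflowError e) {
            System.out.println("hashCode() overflowed the stack");
        }
    }
}
```

This is exactly the "optionally handle the self-referential scenario" the quoted paragraph describes: some methods defend against it, others do not.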

Spring JPA Transient list not initialized

Since I want to use ObservableLists in my entity classes within JavaFX, I originally had a problem with the dedicated List implementation and the injection via reflection that Hibernate uses by default. Therefore I decided to annotate my entity classes with @Access(AccessType.PROPERTY), because I want to enforce that Hibernate uses my getter and setter methods and not reflection.
In a particular class I have a number of List attributes, e.g. protected List<CostEstimate> costEstimates;. For each of these lists I have getters and setters which are annotated accordingly. So far so good; that seems to work.
The trouble is that in my UI I don't want to show e.g. all costEstimates that were created over time, but only the last one. So I created a method public CostEstimate getLastCostEstimate() which returns only the last element from the list. This method is annotated with @Transient since there is no matching column in the MySQL database, as it only returns the last element from the related list.
My controller class binds getLastCostEstimate() of the entity to the according UI element.
In the default constructor of my entity class, the costEstimates list is initialized with an initial default estimate, so that getLastCostEstimate() should always return a meaningful CostEstimate. In the debugger I can see that this initialization is executed. However, at run time the costEstimates list is empty and I get an IndexOutOfBoundsException. I assume that has to do with the @Transient annotation?! I wonder whether I have a coding or design issue. I guess my question is: how do I model this "give me only the last element from a list" in a JPA entity class (without too much boilerplate code)?
Thank you for your help!
For your convenience please find following some related code snippets:
@Entity
@Inheritance
@Access(AccessType.PROPERTY) // JPA reads and writes attributes through their accessor getter and setter methods
public abstract class UserRequirement extends Requirement implements Serializable {
    ..
    protected List<CostEstimate> costEstimates; // Money needed
    ..
    protected UserRequirement() {
        ..
        costEstimates = FXCollections.observableArrayList();
        setLastCostEstimate(new CostEstimate(0));
        ..
    }
    ..
    @OneToMany(mappedBy="parent", orphanRemoval = true, cascade = CascadeType.PERSIST, fetch = FetchType.LAZY)
    public List<CostEstimate> getCostEstimates() {
        return costEstimates;
    }

    @SuppressWarnings("unused") // Used by JPA
    public void setCostEstimates(List<CostEstimate> newCostEstimates) {
        if (newCostEstimates != null) {
            ((ObservableList<CostEstimate>) costEstimates).setAll(newCostEstimates);
        } else {
            costEstimates.clear();
        }
    }

    @Transient
    public CostEstimate getLastCostEstimate() {
        return costEstimates.get(costEstimates.size() - 1);
    }

    public void setLastCostEstimate(CostEstimate costEstimate) {
        if (costEstimate == null) {
            return;
        }
        costEstimates.add(costEstimate);
    }
    ..
}
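As an aside, until the root cause of the empty list is found, a defensive variant of the last-element accessor avoids the IndexOutOfBoundsException. The helper below is a hypothetical sketch (not from the original code), shown on a plain List for a self-contained example:

```java
import java.util.ArrayList;
import java.util.List;

public class LastElementDemo {
    // Sketch of a defensive "last element" accessor analogous to
    // getLastCostEstimate(): returns null instead of throwing
    // IndexOutOfBoundsException when the list is empty.
    static <T> T lastOrNull(List<T> list) {
        return list.isEmpty() ? null : list.get(list.size() - 1);
    }

    public static void main(String[] args) {
        List<String> estimates = new ArrayList<>();
        System.out.println(lastOrNull(estimates)); // null
        estimates.add("initial");
        System.out.println(lastOrNull(estimates)); // initial
    }
}
```

This only masks the symptom, of course; the initialization issue itself still needs investigating.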

Why is the Java 8 Optional class final? [duplicate]

I was playing with the following question: Using Java 8's Optional with Stream::flatMap. I wanted to add a method to a custom Optional<T> and then check whether it worked.
More precisely, I wanted to add a stream() method to my CustomOptional<T> that returns an empty stream if no value is present, or a stream with a single element if one is present.
However, I came to the conclusion that Optional<T> is declared final.
Why is this so? There are loads of classes that are not declared final, and I personally do not see a reason to declare Optional<T> final here.
As a second question: if the worry is that the methods would be overridden, why not make all the methods final and leave the class itself non-final?
According to this page of the Java SE 8 API docs, Optional<T> is a value-based class. According to this page of the API docs, value-based classes have to be immutable.
Declaring all the methods in Optional<T> final would prevent the methods from being overridden, but it would not prevent an extending class from adding fields and methods. Extending the class and adding a field, together with a method that changes the value of that field, would make the subclass mutable and hence would allow the creation of a mutable Optional<T>. The following is an example of such a subclass that could be created if Optional<T> were not declared final.
// Example created by @assylias
public class Sub<T> extends Optional<T> {
    private T t;
    public void set(T t) {
        this.t = t;
    }
}
Declaring Optional<T> final prevents the creation of subclasses like the one above and hence guarantees Optional<T> to be always immutable.
As others have stated, Optional is a value-based class, and since it is a value-based class it should be immutable, which requires it to be final.
But we missed the point of this. One of the main reasons why value-based classes are immutable is to guarantee thread safety: making them immutable makes them thread safe. Take, for example, String or primitive wrappers like Integer or Float; they are declared final for similar reasons.
Probably, the reason is the same as why String is final: so that all users of the Optional class can be assured that the methods on the instance they receive keep to their contract of always returning the same value.
Though we cannot extend the Optional class, we can create our own wrapper class.
public final class Opt {
    private Opt() {
    }

    public static final <T> Stream<T> filledOrEmpty(T t) {
        return Optional.ofNullable(t).isPresent() ? Stream.of(t) : Stream.empty();
    }
}
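For illustration, here is how filledOrEmpty could be used with Stream::flatMap to drop nulls. This is a self-contained sketch that reproduces the Opt wrapper as a nested class; note that Java 9 later added Optional.stream(), which serves the same purpose directly.

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class OptDemo {
    // The wrapper from the answer above, nested here for a runnable example.
    public static final class Opt {
        private Opt() {
        }
        public static <T> Stream<T> filledOrEmpty(T t) {
            return Optional.ofNullable(t).isPresent() ? Stream.of(t) : Stream.empty();
        }
    }

    public static void main(String[] args) {
        // Nulls map to empty streams and disappear; values pass through.
        List<String> result = Stream.of("a", null, "b")
                .flatMap(Opt::filledOrEmpty)
                .collect(Collectors.toList());
        System.out.println(result); // [a, b]
    }
}
```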
Hope it helps you. Glad to see your reaction!

Handle extra elements outside of deserialized class

Putting an extra-elements property in the class to support backward/forward compatibility and implementing ISupportInitialize seems ugly to me, and it is also an OCP violation.
I want to handle extra elements outside of the deserialized class and run some migration logic in external classes.
I mean that the BSON serializer would put the extra-elements data somewhere other than on the deserialized class, and then, after finishing all the deserialization work, call some migrator for the loaded object.
That way I can support compatibility between the fetched document (which may be in an older or newer version) and the currently running code.
Something like:
public interface IMigrate<T>
{
    void Migrate(T obj, BsonDocument extraElements);
}

public class MigrateClazzA : IMigrate<ClazzA>
{
    public void Migrate(ClazzA obj, BsonDocument extraElements)
    {
        ...
    }
}
How can I do it?

Annotations for Java enum singleton

As Bloch states in Item 3 ("Enforce the singleton property with a private constructor or an enum type") of Effective Java 2nd Edition, a single-element enum type is the best way to implement a singleton. Unfortunately the old private constructor pattern is still very widespread and entrenched, to the point that many developers don't understand what I'm doing when I create enum singletons.
A simple // Enum Singleton comment above the class declaration helps, but it still leaves open the possibility that another programmer could come along later and add a second constant to the enum, breaking the singleton property. For all the problems that the private constructor approach has, in my opinion it is somewhat more self-documenting than an enum singleton.
I think what I need is an annotation which both states that the enum type is a singleton and ensures at compile-time that only one constant is ever added to the enum. Something like this:
@EnumSingleton // Annotation complains if > 1 enum element on EnumSingleton
public enum EnumSingleton {
    INSTANCE;
}
Has anyone run across such an annotation for standard Java in public libraries anywhere? Or is what I'm asking for impossible under Java's current annotation system?
UPDATE
One workaround I'm using, at least until I decide to actually bother with rolling my own annotations, is to put @SuppressWarnings("UnusedDeclaration") directly in front of the INSTANCE field. It does a decent job of making the code look distinct from a straightforward enum type.
You can use something like this:
public class SingletonClass {
    private SingletonClass() {
        // block external instantiation
    }

    public static enum SingletonFactory {
        INSTANCE {
            public SingletonClass getInstance() {
                return instance;
            }
        };

        private static SingletonClass instance = new SingletonClass();

        private SingletonFactory() {
        }

        public abstract SingletonClass getInstance();
    }
}
And you can access it from some other class as:
SingletonClass.SingletonFactory.INSTANCE.getInstance();
I'm not aware of such an annotation in public Java libraries, but you can define such a compile-time annotation yourself for your projects. Of course, you need to write an annotation processor for it and invoke APT somehow (with Ant or Maven) to check your @EnumSingleton-annotated enums at compile time for the intended structure.
Here is a resource on how to write and use compile time annotations.
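Short of writing a full annotation processor, a lighter-weight guard (a sketch, not an existing library feature) is to assert the constant count with values(), for example in a unit test or a static check that runs in the build:

```java
public class EnumSingletonGuard {
    // The enum singleton whose single-constant property we want to protect.
    enum EnumSingleton {
        INSTANCE;
    }

    public static void main(String[] args) {
        // Fails fast if another constant is ever added to the enum.
        if (EnumSingleton.values().length != 1) {
            throw new AssertionError("EnumSingleton must declare exactly one constant");
        }
        System.out.println("singleton property holds");
    }
}
```

This catches the "second constant added later" mistake at test time rather than at compile time, but requires no annotation-processing machinery.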

Entity Framework Model with Table-per-Type Inheritance

When I define a model in Entity Framework with table-per-type inheritance, with a base class/table (not abstract) called Person and two sub-entities and tables, Adult and Child: after creating a Child, how would I take the same object and convert it to an Adult? Once converted to an Adult, the Child record should be deleted, though the base-class data in the Person table should be retained.
It is not possible. It is a similar problem to this one. The entity simply exists and its type is immutable. The only way to do it is to delete the child entity (= records from both tables) and create a new adult entity (= new records in both tables).
This doesn't look like scenario for inheritance at all.
Edit:
The comment about inheritance was targeted at the scenario where you mentioned Person, Adult and Child entities. Anyway, once your scenario allows changing the type, you should think about another solution where the part that can change is handled by composition.
For example:
public class DataSource
{
    public int Id { get; set; }
    public virtual DataSourceFeatures Features { get; set; }
}

public class DataSourceFeatures
{
    [Key, ForeignKey("DataSource")]
    public int Id { get; set; }
    public virtual DataSource DataSource { get; set; }
}

public class XmlDataSourceFeatures : DataSourceFeatures { ... }
public class DelimitedDataSourceFeatures : DataSourceFeatures { ... }
public class ServiceDataSourceFeatures : DataSourceFeatures { ... }
Now changing a type means deleting the current dependent DataSourceFeatures from the database and creating a new one, but the original object remains the same; only the relation changes.
I wouldn't do this with EF, because with inheritance you've created an object-oriented abstraction over table relationships that doesn't allow you to convert between different types. In OO you can't do things like this:
Child child = new Child();
Adult grownUp = child;
And then expect the child to be an adult. You'd do it like this:
Child child = new Child();
Adult grownUp = child.GrowUp();
So, assuming you're using SQL Server, you could do that with a stored procedure. Something like GrowUp(child) that creates a new entry in the Adult table and deletes the entry in the Child table, but leaves Person untouched. You could return the new adult object from the procedure, which could then be used like this:
Adult grownUp = context.GrowUp(child);
However, you'd need to make sure in your code that you don't use the child object anymore after this line, and you probably need to refresh it or remove it from the context (I'm not entirely sure about this).
