static vs default method - Functional interfaces - java-8

I am creating functional interfaces and want to reuse default methods from an anonymous implementation.
public class JavaInterfaceTest {
    public static void main(String[] args) {
        FunctionalIntf fi = () -> {
            System.out.println("In anonymous impl, trying to call default method");
            // doInternal()
            return "Hello";
        };
        fi.doFunction(); // How is this line valid?
        fi.doInternal();
        FunctionalIntf.doSomething();
    }
}
@FunctionalInterface
interface FunctionalIntf {
    String doFunction();
    default void doInternal() {
        System.out.println("In doInternal");
    }
    static void doSomething() {
        System.out.println("In doSomething");
    }
}
1. How is fi.doFunction(); valid if I go through an anonymous implementation?
2. How can I reuse a default method or a static method from my implementation if I want to?
3. Is returning something valid/best practice in my case, given that I cannot handle the returned value?

When you create the anonymous class, you actually provide an implementation for the abstract method doFunction() from the FunctionalIntf interface. So when you use this line of code:
fi.doFunction();
it means that you are calling the doFunction() method of the anonymous class. Here is another example of how functional interfaces work:
Runnable r = new Runnable() {
    @Override
    public void run() {
        System.out.println("I'm Runnable!");
    }
};
r.run();
In this case we override the run() method from the Runnable interface, which is also a functional interface.
You cannot provide another implementation for the static method because static methods cannot be overridden; static methods declared in an interface are not inherited by implementing classes at all. You can, however, provide another implementation for the default method by overriding it, as shown in the example above.
Regarding the returned value, you need to define your method to return exactly the value you need; there is no general best practice for that.
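A minimal sketch of those two points, reusing the FunctionalIntf interface from the question: an anonymous class may override the default method, while the static method is never overridden and is called through the interface name.
// Sketch only, assuming the FunctionalIntf interface from the question is on the classpath.
FunctionalIntf overridden = new FunctionalIntf() {
    @Override
    public String doFunction() {
        return "Hello from an anonymous class";
    }
    @Override
    public void doInternal() { // default methods follow the normal overriding rules
        System.out.println("Custom doInternal");
    }
};
overridden.doInternal();      // prints "Custom doInternal"
FunctionalIntf.doSomething(); // static interface methods are always invoked on the interface itself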

When you implement a functional interface, you have two options:
Use a lambda, as you have.
Use an inner (anonymous) class.
If you use a lambda, as you have, you get the default implementations of the static and default methods along with your implementation of the abstract method.
If you use an inner class, the normal rules of overriding apply.
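One detail worth spelling out in a small sketch that again reuses FunctionalIntf from the question: inside a lambda body the interface's default methods are not in scope (this refers to the enclosing class), which is why the commented-out doInternal() call in the question would not compile there; inside an anonymous class they are inherited and can be called directly.
// Sketch: where doInternal() is visible.
FunctionalIntf lambda = () -> {
    // doInternal();   // does not compile here: the lambda body sees only the enclosing class
    return "Hello from a lambda";
};
lambda.doInternal();   // fine: the inherited default implementation runs

FunctionalIntf anon = new FunctionalIntf() {
    @Override
    public String doFunction() {
        doInternal();  // fine: the anonymous class inherits the default method
        return "Hello from an anonymous class";
    }
};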

Related

Intercepting method for execution time using Aspect [duplicate]

Please explain why self-invocation on a proxy is performed on the target and not on the proxy. If that is done on purpose, then why? If proxies are created by subclassing, it is possible to have some code executed before each method call, even on self-invocation. I tried it, and I do get the proxy behaviour on self-invocation:
public class DummyPrinter {
    public void print1() {
        System.out.println("print1");
    }
    public void print2() {
        System.out.println("print2");
    }
    public void printBoth() {
        print1();
        print2();
    }
}

public class PrinterProxy extends DummyPrinter {
    @Override
    public void print1() {
        System.out.println("Before print1");
        super.print1();
    }
    @Override
    public void print2() {
        System.out.println("Before print2");
        super.print2();
    }
    @Override
    public void printBoth() {
        System.out.println("Before print both");
        super.printBoth();
    }
}

public class Main {
    public static void main(String[] args) {
        DummyPrinter p = new PrinterProxy();
        p.printBoth();
    }
}
Output:
Before print both
Before print1
print1
Before print2
print2
Here each method is called on the proxy. Why does the documentation mention that AspectJ should be used in the case of self-invocation?
Please read this chapter in the Spring manual, then you will understand. Even the term "self-invocation" is used there. If you still do not understand, feel free to ask follow-up questions, as long as they are in context.
Update: Okay, now after we have established that you really read that chapter and after re-reading your question and analysing your code I see that the question is actually quite profound (I even upvoted it) and worth answering in more detail.
Your (false) assumption about how it works
Your misunderstanding is about how dynamic proxies work, because they do not work like your sample code. Let me add the object ID (hash code) to the log output of your own code for illustration:
package de.scrum_master.app;

public class DummyPrinter {
    public void print1() {
        System.out.println(this + " print1");
    }
    public void print2() {
        System.out.println(this + " print2");
    }
    public void printBoth() {
        print1();
        print2();
    }
}

package de.scrum_master.app;

public class PseudoPrinterProxy extends DummyPrinter {
    @Override
    public void print1() {
        System.out.println(this + " Before print1");
        super.print1();
    }
    @Override
    public void print2() {
        System.out.println(this + " Before print2");
        super.print2();
    }
    @Override
    public void printBoth() {
        System.out.println(this + " Before print both");
        super.printBoth();
    }

    public static void main(String[] args) {
        new PseudoPrinterProxy().printBoth();
    }
}
Console log:
de.scrum_master.app.PseudoPrinterProxy@59f95c5d Before print both
de.scrum_master.app.PseudoPrinterProxy@59f95c5d Before print1
de.scrum_master.app.PseudoPrinterProxy@59f95c5d print1
de.scrum_master.app.PseudoPrinterProxy@59f95c5d Before print2
de.scrum_master.app.PseudoPrinterProxy@59f95c5d print2
See? There is always the same object ID, which is no surprise. Self-invocation for your "proxy" (which is not really a proxy but a statically compiled subclass) works due to polymorphism. This is taken care of by the Java compiler.
How it really works
Now please remember we are talking about dynamic proxies here, i.e. subclasses and objects created during runtime:
JDK proxies work for classes implementing interfaces, which means that classes implementing those interfaces are being created during runtime. In this case there is no superclass anyway, which also explains why it only works for public methods: interfaces only have public methods.
CGLIB proxies also work for classes not implementing any interfaces and thus also work for protected and package-scoped methods (not private ones though because you cannot override those, thus the term private).
The crucial point, though, is that in both of the above cases the original object already (and still) exists when the proxies are created, thus there is no such thing as polymorphism. The situation is that we have a dynamically created proxy object delegating to the original object, i.e. we have two objects: a proxy and a delegate.
I want to illustrate it like this:
package de.scrum_master.app;

public class DelegatingPrinterProxy extends DummyPrinter {
    DummyPrinter delegate;

    public DelegatingPrinterProxy(DummyPrinter delegate) {
        this.delegate = delegate;
    }
    @Override
    public void print1() {
        System.out.println(this + " Before print1");
        delegate.print1();
    }
    @Override
    public void print2() {
        System.out.println(this + " Before print2");
        delegate.print2();
    }
    @Override
    public void printBoth() {
        System.out.println(this + " Before print both");
        delegate.printBoth();
    }

    public static void main(String[] args) {
        new DelegatingPrinterProxy(new DummyPrinter()).printBoth();
    }
}
See the difference? Consequently the console log changes to:
de.scrum_master.app.DelegatingPrinterProxy@59f95c5d Before print both
de.scrum_master.app.DummyPrinter@5c8da962 print1
de.scrum_master.app.DummyPrinter@5c8da962 print2
This is the behaviour you see with Spring AOP or other parts of Spring using dynamic proxies or even non-Spring applications using JDK or CGLIB proxies in general.
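For readers who want to see the runtime-created variant rather than a hand-written subclass, here is a hedged, minimal sketch using the JDK's java.lang.reflect.Proxy; the Printer interface and class names are invented for the example and are not part of the original code. The proxy and the target are two separate objects, so printBoth() on the target calls its own print1() and print2() directly and the interceptor fires only once.
// Minimal sketch (hypothetical Printer interface, not from the original code):
// the JDK proxy delegates to a separate target object, so self-invocation
// inside the target never reaches the proxy.
import java.lang.reflect.Proxy;

interface Printer {
    void print1();
    void print2();
    void printBoth();
}

class PlainPrinter implements Printer {
    public void print1() { System.out.println(this + " print1"); }
    public void print2() { System.out.println(this + " print2"); }
    public void printBoth() { print1(); print2(); } // direct self-invocation on the target
}

public class JdkProxyDemo {
    public static void main(String[] args) {
        Printer target = new PlainPrinter();
        Printer proxy = (Printer) Proxy.newProxyInstance(
                Printer.class.getClassLoader(),
                new Class<?>[] { Printer.class },
                (p, method, methodArgs) -> {
                    System.out.println("Before " + method.getName()); // the "advice"
                    return method.invoke(target, methodArgs);         // delegate to the target
                });
        proxy.printBoth(); // only printBoth() is intercepted; print1()/print2() run on the target directly
    }
}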
Is this a feature or a limitation? I as an AspectJ (not Spring AOP) user think it is a limitation. Maybe someone else might think it is a feature because due to the way proxy usage is implemented in Spring you can in principle (un-)register aspect advices or interceptors dynamically during runtime, i.e. you have one proxy per original object (delegate), but for each proxy there is a dynamic list of interceptors called before and/or after calling the delegate's original method. This can be a nice thing in very dynamic environments. I have no idea how often you might want to use that. But in AspectJ you also have the if() pointcut designator with which you can determine during runtime whether to apply certain advices (AOP language for interceptors) or not.
Solutions
What you can do in order to solve the problem is:
Switch to native AspectJ, using load-time weaving as described in the Spring manual. Alternatively, you can also use compile-time weaving, e.g. via AspectJ Maven plugin.
If you want to stick with Spring AOP, you need to make your bean proxy-aware, i.e. indirectly also AOP-aware, which is less than ideal from a design point of view. I do not recommend it, but it is easy enough to implement: simply self-inject a reference to the component, e.g. @Autowired MyComponent INSTANCE, and then always call methods using that bean instance: INSTANCE.internalMethod(). This way, all calls go through proxies and Spring AOP aspects get triggered (a sketch follows below).
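Here is a hedged sketch of that self-injection workaround; the class and method names are invented for illustration and are not taken from the question:
// Illustrative only: a Spring component that self-injects its own proxy so that
// internal calls still pass through Spring AOP advice.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class ReportService {

    @Autowired
    private ReportService self; // Spring injects the proxied bean, not the raw instance

    public void generateAll() {
        // A plain this.generateOne(...) call would bypass the proxy;
        // going through the injected reference keeps the aspects applied.
        self.generateOne("monthly");
    }

    public void generateOne(String name) {
        // ... advised business logic ...
    }
}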
@Dolphin
It is a late reply, but maybe this text will help you: Spring AOP and self-invocation.
In short, the link leads you to a simple example of why running code from another class will work but from "self" it won't.
Notice in the example from the link that when you run the code from another class, you ask Spring to inject the bean; Spring sees that the bean asks for a cache and creates a runtime proxy for that bean.
On the other hand, when you do the same in the "self" class, you make a plain compile-time method call, and Spring won't do anything about it.
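As a hedged illustration of that caching case (the service and method names are made up for the example, not taken from the linked text): the external call goes through the cache proxy, while the internal call does not.
// Illustrative only: self-invocation bypasses the @Cacheable proxy.
import java.math.BigDecimal;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class PriceService {

    @Cacheable("prices")
    public BigDecimal lookupPrice(String sku) {
        System.out.println("Computing price for " + sku); // printed only on cache misses
        return computeExpensivePrice(sku);
    }

    public BigDecimal priceWithTax(String sku) {
        // This is a plain this.lookupPrice(sku) call: it never reaches the proxy,
        // so the cache is ignored and the price is recomputed every time.
        return lookupPrice(sku).multiply(new BigDecimal("1.19"));
    }

    private BigDecimal computeExpensivePrice(String sku) {
        return new BigDecimal("42.00");
    }
}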

C# Attribute or Code Inspection Comment to Encourage or Discourage Call to Base Method from Virtual Method Override

I'm working on a C# project in Unity with Rider.
I sometimes see a base class with an empty virtual method, and then a derived class that overrides that method. The method override has an explicit call to base.MethodName() even though the base method is empty.
public class A
{
    public virtual void Method1() { }
    public virtual void Method2()
    {
        // Important logic performed here!
    }
}
public class B : A
{
    public override void Method1()
    {
        base.Method1();
        // Do something else ...
    }
    public override void Method2()
    {
        // Do something here ...
    }
}
When looking at the method in Rider's IL Viewer, the call to the base method is included, even though the method is empty.
Are there any method attributes or code inspection comments in C# or Rider that could:
Generate a compiler or code inspection warning when calling a base method that is empty.
Generate a compiler or code inspection warning when not calling a base method that is not empty.
For example:
public class A
{
    [OmitCallFromOverride]
    public virtual void Method1() { }
    [RequireCallFromOverride]
    public virtual void Method2()
    {
        // Important logic performed here!
    }
}
I can imagine a scenario where multiple derived classes override a method and one or more mistakenly failed to call the base method, which might result in unexpected behavior. Or situations where there are unnecessary calls to an empty base method, which may be wasteful, but unlikely to break anything.
While I'm primarily inquiring about whether such attributes or code inspection comments exist, I am also curious to know of how people might handle these situations, such as simply always calling the base method from an override, keeping important logic out of base virtual methods, or using some other method of communicating whether a base method call is unnecessary or required.
Generate a compiler or code inspection warning when calling a base method that is empty.
In C#, as far as I know, there is no warning for an empty method, so I think there is no warning for calling a base method that is empty.
But you are free to write one yourself: Write your first analyzer and code fix.
Generate a compiler or code inspection warning when not calling a base method that is not empty.
Not in C#, and I think it is not a good idea to force a derived class to call a base method. I can understand that in your scenario it would be great if every derived class's override always called the base method, but that is a very uncommon case. And generally, when we need tricky (non-intuitive) rules, it means our solution is not very clear, or it will be error-prone.
keeping important logic out of base virtual methods
If you need A.Method1 to be called, maybe leaving it as a virtual method is not a good idea. You make a method virtual when you want to give your derived classes the opportunity to use it OR to override it with a more suitable version.
I propose a solution that you can maybe adapt to your scenario.
abstract class A
{
    public abstract void Method1();
    public virtual void Method2() { }
    public void MustBeCalled()
    {
        // Put the logic that must always be executed here; this method is not
        // virtual, so it cannot be overridden.
    }
    public void TemplateMethod()
    {
        Method1();
        MustBeCalled();
        // Do something else ...
    }
}

Using @SubscribeMapping annotated method for RPC-like behavior when return value is deferred

I really like the @SubscribeMapping approach to implement RPC-like semantics with STOMP-over-WebSocket.
Unfortunately its "magic" requires that the annotated method returns a value. But what if the return value is not readily available? I want to avoid blocking inside the method while waiting for it. Instead I'd like to pass a callback that will publish the value when it's ready. I thought I could use the messaging template's convertAndSendToUser() inside a callback to do that. It turns out @SubscribeMapping handling is quite special, and this is not possible with an instance of SimpMessageSendingOperations.
I was able to achieve my goal by calling handleReturnValue() on a SubscriptionMethodReturnValueHandler, but the overall mechanics of this are very tedious, if not hackish (like providing a dummy instance of MethodParameter to handleReturnValue()):
public class MessageController {
    private final SubscriptionMethodReturnValueHandler subscriptionMethodReturnValueHandler;

    @Autowired
    public MessageController(SimpAnnotationMethodMessageHandler annotationMethodMessageHandler) {
        SubscriptionMethodReturnValueHandler subscriptionMethodReturnValueHandler = null;
        for (HandlerMethodReturnValueHandler returnValueHandler : annotationMethodMessageHandler.getReturnValueHandlers()) {
            if (returnValueHandler instanceof SubscriptionMethodReturnValueHandler) {
                subscriptionMethodReturnValueHandler = (SubscriptionMethodReturnValueHandler) returnValueHandler;
                break;
            }
        }
        this.subscriptionMethodReturnValueHandler = subscriptionMethodReturnValueHandler;
    }

    @SubscribeMapping("/greeting/{name}")
    public void greet(@DestinationVariable String name, Message<?> message) throws Exception {
        subscriptionMethodReturnValueHandler.handleReturnValue("Hello " + name, new MethodParameter(Object.class.getMethods()[0], -1), message);
    }
}
So my question is simple: Is there a better way?

Visual Studio code generated when choosing to explicitly implement interface

Sorry for the vague title, but I'm not sure what this is called.
Say I add IDisposable to my class; Visual Studio can create the method stub for me. But it creates the stub like this:
void IDisposable.Dispose()
I don't follow what this syntax is doing. Why do it like this instead of public void Dispose()?
And with the first syntax, I couldn't work out how to call Dispose() from within my class (in my destructor).
When you implement an interface member explicitly, which is what the generated code is doing, you can't access the member through the class instance. Instead you have to call it through an instance of the interface. For example:
class MyClass : IDisposable
{
    void IDisposable.Dispose()
    {
        // Do Stuff
    }

    ~MyClass()
    {
        IDisposable me = (IDisposable)this;
        me.Dispose();
    }
}
This enables you to implement two interfaces with a member of the same name and explicitly call either member independently.
interface IExplicit1
{
    string InterfaceName();
}
interface IExplicit2
{
    string InterfaceName();
}
class MyClass : IExplicit1, IExplicit2
{
    string IExplicit1.InterfaceName()
    {
        return "IExplicit1";
    }
    string IExplicit2.InterfaceName()
    {
        return "IExplicit2";
    }
}
public static void Main()
{
    MyClass myInstance = new MyClass();
    Console.WriteLine(((IExplicit1)myInstance).InterfaceName()); // outputs "IExplicit1"
    IExplicit2 myExplicit2Instance = (IExplicit2)myInstance;
    Console.WriteLine(myExplicit2Instance.InterfaceName());      // outputs "IExplicit2"
}
Visual Studio gives you two options:
Implement
Implement explicitly
You normally choose the first (non-explicit) one, which gives you the behaviour you want.
The "explicit" option is useful if you inherit the same method from two different interfaces, i.e. multiple interface inheritance (which isn't usual).
Members of an interface type are always public, which requires their method implementations to be public as well. This doesn't compile, for example:
interface IFoo { void Bar(); }
class Baz : IFoo {
    private void Bar() { } // CS0737
}
Explicit interface implementation provides a syntax that allows the method to be private:
class Baz : IFoo {
    void IFoo.Bar() { } // No error
}
A classic use for this is to hide the implementation of a base interface type. IEnumerable<> would be a very good example:
class Baz : IEnumerable<Foo> {
    public IEnumerator<Foo> GetEnumerator() { }
    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { }
}
Note how the generic version is accessible, the non-generic version is hidden. That both discourages its use and avoids a compile error because of a duplicate method.
In your case, implementing Dispose() explicitly is wrong. You wrote Dispose() to allow the client code to call it, forcing it to cast to IDisposable to make the call doesn't make sense.
Also, calling Dispose() from a finalizer is a code smell. The standard pattern is to add a protected Dispose(bool disposing) method to your class.
