What is the Hidden Classes feature in Java 15?

What exactly is the Hidden Classes feature in Java 15?
As I read in the Java 15 documentation, the Spring and Hibernate frameworks use Hidden Classes? I need a real example of Spring and/or Hibernate using Hidden Classes.

To answer this question, it's important to first distinguish the Java language from the Java runtime. The former is the "Java code" a programmer writes, while the latter is the "Java program" people use to run code written in that language (among other languages, like Kotlin and Clojure).
In the Java language, there are many ways to define a class. Most classes are like ArrayList, which are top-level and have a programmer-defined name. But other ways of defining a class may not have a simple name. An anonymous inner class, for example, does not provide a way for a programmer to give it a name. The same goes for lambda expressions introduced in Java 8.
In the past, the Java runtime had the limitation that every class must have a name and must be publicly addressable by the runtime. This meant the Java compiler gave an anonymous inner class a name unlikely to conflict with any other class's name, usually containing dollar signs, which are legal in class names to the Java runtime but illegal in the Java language. This is an implementation detail. But because all classes have names and are all addressable through the class loader, the abstraction is "leaky": classes that are meant to be hidden can still be reached through access to the class loader.
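A quick way to see these compiler- and runtime-generated names is to print what the runtime reports for an anonymous class and a lambda (the class name in the comments is illustrative; the exact suffix varies by JDK):

```java
public class CompilerNamedClassesDemo {
    // The compiler invents a name like CompilerNamedClassesDemo$1 for this class:
    static String anonymousClassName() {
        Runnable anon = new Runnable() {
            @Override
            public void run() { }
        };
        return anon.getClass().getName();
    }

    // Lambdas are implemented by runtime-generated classes whose names also contain '$':
    static String lambdaClassName() {
        Runnable lambda = () -> { };
        return lambda.getClass().getName();
    }

    public static void main(String[] args) {
        System.out.println(anonymousClassName()); // e.g. CompilerNamedClassesDemo$1
        System.out.println(lambdaClassName());    // e.g. CompilerNamedClassesDemo$$Lambda...
    }
}
```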
"Hidden classes" are a new (to Java 15) feature of the Java runtime, a way for programs to define classes that cannot be addressed by other classes running on the class loader. They still have names, but access is scoped in a way that their existence cannot be "leaked" to other parts of the program. It is not a new language feature, but a tool that a compiler (or runtime framework) may use to implement certain pre-existing language features.
To the typical Java programmer, this process is transparent. Unless your program compiles code or manipulates bytecode, you do not need to worry about this implementation detail. But for those who do, this is a valuable tool in their software toolbox.
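For those who do manipulate bytecode, here is a minimal sketch of the new API, `MethodHandles.Lookup.defineHiddenClass` (JDK 15+). The class file bytes for an empty class named `G` are hand-assembled here to keep the example self-contained; real frameworks would generate them with a bytecode library such as ASM:

```java
import java.lang.invoke.MethodHandles;

public class HiddenClassDemo {
    // Minimal, hand-assembled class file for an empty class named "G"
    // (extends java.lang.Object, no fields, no methods).
    static byte[] classBytes() {
        return new byte[] {
            (byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE, // magic
            0, 0, 0, 52,          // minor / major version (Java 8 class file format)
            0, 5,                 // constant pool count
            7, 0, 2,              // #1 Class -> name at #2
            1, 0, 1, 'G',         // #2 Utf8 "G"
            7, 0, 4,              // #3 Class -> name at #4
            1, 0, 16, 'j','a','v','a','/','l','a','n','g','/',
                      'O','b','j','e','c','t',                  // #4 Utf8
            0, 32,                // access flags (ACC_SUPER)
            0, 1, 0, 3,           // this_class = #1, super_class = #3
            0, 0, 0, 0, 0, 0, 0, 0 // no interfaces, fields, methods, attributes
        };
    }

    // Defines the bytes as a hidden class in the caller's package.
    static Class<?> defineHidden() {
        try {
            return MethodHandles.lookup()
                    .defineHiddenClass(classBytes(), true)
                    .lookupClass();
        } catch (IllegalAccessException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        Class<?> hidden = defineHidden();
        System.out.println(hidden.isHidden());  // true
        System.out.println(hidden.getName());   // something like G/0x0000... (not addressable)
        try {
            Class.forName(hidden.getName());    // hidden classes are not discoverable by name
        } catch (ClassNotFoundException e) {
            System.out.println("not found by name");
        }
    }
}
```

Note how the class's name contains a `/` suffix and cannot be resolved through `Class.forName`: that is the "cannot be addressed by other classes" property described above.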


OSGi: when to use component framework and when to create objects yourself

I've been an AEM developer for almost a year now. I know AEM uses the Declarative Services component framework to manage the life cycle of OSGi components.
Consider a scenario where I export a package from a bundle and import that package from another bundle: I could then create objects of the first bundle's classes inside the second bundle as well. It's an import-export contract in this case.
My question is: when should I use the component framework to manage the lifecycle of my objects, and when should I handle it myself by creating them when required?
In an ideal design, you would NOT in fact be able to create objects from the exported package, because that package would contain only interfaces. This makes it a "pure" contract (API) export. If there are classes in there that you can directly instantiate, then they are implementation classes.
In general it is far better to export only pure APIs and to keep implementation classes hidden. There are two main reasons:
Implementation classes tend to have downstream dependencies. If you depend directly from implementation class to implementation class then you get a very large and fragile dependency graph... and eventually that graph will contain a cycle. In fact it's almost inevitable that it will. At that point, your application is not modular because you cannot deploy or change any part of it independently.
Pure interfaces can be analysed for compatibility between versions. As a consumer or a provider of an API, you know exactly which versions of the API you can support because the API does not contain executable code. However if you have a dependency onto an implementation class, then you never really know when they break compatibility because the breakage could happen deep down in executable code that you can't easily analyse.
If your objects are services then there's no question, they have to be OSGi components.
For other things, my first choice is OSGi components, unless they're trivial objects like data holders or something similar.
If an object requires configuration or refers to OSGi services then it's also clearly an OSGi component.
In general, it's best IMO to think in services and define your package exports as the minimum that allows other bundles to use a bundle's services. Unless a bundle is clearly a reusable library like commons-io (to take a simple example).
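To make the pure-API idea concrete, here is a plain-Java sketch (all names hypothetical): the exported package contains only the interface, while the implementation class lives in a non-exported package and, in OSGi, would be registered as a Declarative Services component.

```java
// Would live in an exported package, e.g. com.example.greeting.api: interfaces only.
interface GreetingService {
    String greet(String name);
}

// Would live in a non-exported (private) package. In OSGi this class would carry
// an @Component annotation and be published as a service by Declarative Services.
final class GreetingServiceImpl implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

public class ApiVsImplDemo {
    public static void main(String[] args) {
        // Consumers code against the interface; in OSGi the instance would be
        // supplied by the service registry rather than constructed directly.
        GreetingService service = new GreetingServiceImpl();
        System.out.println(service.greet("OSGi")); // Hello, OSGi
    }
}
```

Because other bundles see only `GreetingService`, the implementation can be replaced or versioned independently, which is exactly the modularity argument made above.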

Frameworks and specifications for JPA, CDI and JSF

I have read and I have understood that JPA, JSF, CDI are only specifications. Such as:
JPA - [Hibernate, Toplink], JSR-000338
CDI - [Spring - Google Guice, PicoContainer], JSR299
JSF - [Primefaces, IceFaces, RichFaces], JSR-000314
So, if they are only specifications on paper, why do the packages say "javax.persistence...", "javax.faces..."?
I think Oracle is saying: here is the paper with the rules. If you want to implement it, you must use my package names ("javax.persistence...", "javax.faces..."), and in return you may add more features and put those extended features in your own packages?
Another thing: if I study the specifications (JPA, CDI, JSF), will I be able to use whatever framework implements them? Or even build my software without them?
Please explain this to me.
Best regards.
First of all, neither Spring nor Guice (nor PicoContainer, AFAIK) is a CDI implementation. JBoss Weld is a CDI implementation.
Second. The specification is not just a paper. It's also a set of interfaces and classes that every implementation must correctly implement or extend or which even contain core functionalities that don't depend on the implementation (see Persistence, for example). Those interfaces and classes are the ones that are in the javax package. They're part of the specification itself, and implementations may not modify them.
The idea of a standard is indeed that by relying on the rules described in the specifications, you should be able to use whatever framework implementing the specifications. Beware, though, that some parts are sometimes left unspecified, and that implementations, even without bugs, can do some things differently.
Writing your software without choosing an implementation would, theoretically, be possible, as long as the user of your software chooses the implementation they want. But that is extremely unrealistic: you will have to test your software and thus choose an implementation for your tests. And if you plan on supporting several implementations of the specification, you will have to test your software with all of them, and probably even need to make adjustments.
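The "specification ships real code" point can be sketched in plain Java (names hypothetical): the spec side consists of an interface plus bootstrap code that discovers whatever implementation is on the classpath, much like `javax.persistence.Persistence` locates a JPA provider at runtime.

```java
import java.util.Optional;
import java.util.ServiceLoader;

// The "spec" side: an interface that vendors implement.
interface PersistenceProviderApi {
    String providerName();
}

public class SpecBootstrapDemo {
    // Spec-style bootstrap: finds whichever implementation is deployed.
    static Optional<PersistenceProviderApi> findProvider() {
        return ServiceLoader.load(PersistenceProviderApi.class).findFirst();
    }

    public static void main(String[] args) {
        // With no implementation jar on the classpath, the spec code alone finds nothing:
        System.out.println(findProvider().isPresent()); // false
    }
}
```

This is why the `javax.*` classes exist as real, runnable code even though the specification leaves the actual persistence work to Hibernate, EclipseLink, and so on.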

Why do people cite the ability to "define module boundaries" as an advantage of OSGI?

Don't the default Java access modifiers (public, protected, private) already define boundaries for how classes can be accessed? Why are these modifiers not sufficient? Why is OSGi's way of "defining module boundaries" better than this?
Yes, the Java access modifiers define a class's boundaries and, to some extent, a package's boundaries, but a module is larger than a single class or package. You may want to see http://www.slideshare.net/bjhargrave/why-osgi which explains the progression of encapsulation from classes up to modules.
Short answer
In a modularized system it is very important to separate API from implementation, where only the API is exported. You cannot do that with class modifiers alone. Another very important part of OSGi is the versioning of packages. You assign versions only to the packages that are exported.
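For illustration, a bundle manifest might export only its API package, with a version, while implementation packages are simply not listed and therefore stay invisible to other bundles (package names hypothetical):

```
Export-Package: com.example.greeting.api;version="1.2.0"
```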
Long answer
A more precise answer to this question is available in the following wiki post, written by Neil Bartlett: http://wiki.osgi.org/wiki/Export_Only_APIs
Similar question
Why do we need object-orientation when functions are already available in structured languages? Aren't functions enough to separate the logical units of an algorithm?
I thought about it a little bit, and realized that there are certain privacy restrictions that OSGI's export mechanism can impose that plain old Java access modifiers cannot. See the diagrams below.
Notice how in Plain Old Java, a public class is visible (indicated by a green arrow) to all classes no matter what. In OSGI, a public class is visible to all classes (including classes in another bundle) ONLY if it is part of an exported package.
Note: the "protected classes" in the diagram are really just classes without any modifier (since there is no "protected" modifier for top-level classes, just for class members).
Edit: I'm adding this relevant quote from http://njbartlett.name/files/osgibook_preview_20091217.pdf:
"A public class is visible to every class in every other package; a default access class is only available to other classes within the same package.
There is something missing here. The above access modifiers relate to visibility across packages, but the unit of deployment in Java is not a package, it is a JAR file. Most JAR files offering non-trivial APIs contain more than one package (HttpClient has eight), and generally the classes within the JAR need to have access to the classes in other packages of the same JAR. Unfortunately that means we must make most of our classes public, because that is the only access modifier which makes classes visible across package boundaries.
As a consequence, all those classes declared public are accessible to clients outside the JAR as well. Therefore the whole JAR is effectively public API, even the parts that we would prefer to keep hidden. This is another symptom of the lack of any runtime representation for JAR files."

Using the OSGi LogService in a real world application

What is the proper way to to use the OSGi LogService in a real world application? At first I figured I would use Declarative Services to create components if a class needs to log something. However, this means you have to declare a service component for just about every single class which seems like overkill and a pain to maintain. This is especially true when you need to make most of the classes into component factories that are just helper classes.
Also, this doesn't work very well for abstract classes and non-final classes as the developer extending the class has to make sure he/she declares a component with the same references as the base class. This is especially true in the system I'm developing that essentially provides a library containing abstract classes that other developers will use.
The current solution is providing static log methods that use a static instance of a LogService reference. However, the LogService provider treats all log messages as coming from the bundle that contains the static log class.
In OSGi (as in any environment) you want to stay away from static helpers as much as you can, so the static log method solution is not the best way to go here. As you run in an OSGi environment, you will want to use the LogService as a central, bundle- and service-aware conduit for all your logging. There are two cases to consider.
Legacy and library code
If the code you use needs logging functionality, but is not OSGi aware, you can build (or find) bridges to the LogService.
Code under your control
Assuming that all code under your control is supposed to be service-aware, it should use the LogService directly. For most components this is easy, but some cases need additional consideration.
For abstract classes, it all depends on what you use them for.
Are they base classes that help you with OSGi details? Then declarative services may not be your best bet, you could look into other dependency management mechanisms that handle inheritance differently.
Do they provide non-OSGi aware base functionality? This case should be no problem, as your concrete subclass will be registered as a component.
We all run into situation where library code seems to need logging; however, ask yourself whether it really does. Very generic code can rarely know what it should log. If it knows enough about your situation, it probably should be located in a component, delegating the details to the actual library code. For exceptional situations that warrant logging, you should probably use exceptions.
Do you really need to log from non-service-aware code? You can pass a LogService into the helper methods (so at least we know on whose behalf the code is executing).
A special case to consider are long-running operations that are not OSGi-aware: if you give a service reference to, for instance, a worker thread which may run a very long time, you're asking for trouble, not only for logging.
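Passing the logger into non-service-aware helper code can be sketched like this (all names hypothetical; `Consumer<String>` stands in for the OSGi LogService so the example stays self-contained). The point is that the component owns the service reference and lends it to the helper, so log entries are attributed to the calling bundle:

```java
import java.util.function.Consumer;

// Library-style helper with no OSGi awareness: it logs only through
// the facade handed to it by its caller.
final class CsvHelper {
    static int countRows(String csv, Consumer<String> log) {
        int rows = csv.isEmpty() ? 0 : csv.split("\n").length;
        log.accept("countRows: parsed " + rows + " row(s)");
        return rows;
    }
}

public class LogPassingDemo {
    public static void main(String[] args) {
        // In a real component this lambda would delegate to the injected LogService:
        Consumer<String> componentLog = msg -> System.out.println("[my.bundle] " + msg);
        int rows = CsvHelper.countRows("a,b\nc,d", componentLog);
        System.out.println(rows); // 2
    }
}
```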

Dependency injection, Scala and Spring

I love the concept of DI and loosely coupled system, a lot. However, I found tooling in Spring lacking at best. For example, it's hard to do "refactoring", e.g. to change a name of a bean declared in Spring. I'm new to Spring, so I would be missing something. There is no compiling time check etc.
My question is why do we want to use XML to store the configuration? IMO, the whole idea of Spring (IoC part) is to force certain creational pattern. In the world of gang-of-four patterns, design patterns are informative. Spring (and other DIs) on the other hand, provides very prescribed way how an application should be hooked up with individual components.
I have put Scala in the title as well as I'm learning it. How do you guys think to create a domain language (something like the actor library) for dependency ingestion. Writing the actual injection code in Scala itself, you get all the goodies and tooling that comes with it. Although application developers might as well bypass your framework, I would think it's relatively easy to standard, such as the main web site/app will only load components of certain pattern.
There's a good article on using Scala together with Spring and Hibernate here.
About your question: you can actually use annotations, which has some advantages. XML, in turn, is good because you don't need to recompile the files that contain your injection configuration.
There is an ongoing debate if Scala needs DI. There are several ways to "do it yourself", but often this easy setup is sufficient:
//the class that needs injection
abstract class Foo {
  val injectMe: String
  def hello = println("Hello " + injectMe)
}

//The "binding module"
trait Binder {
  def createFooInstance: Foo
}

object BinderImpl extends Binder {
  trait FooInjector {
    val injectMe = "DI!"
  }
  def createFooInstance: Foo = new Foo with FooInjector
}

//The client
val binder: Binder = getSomehowTheRightBinderImpl //one way would be a ServiceLoader
val foo = binder.createFooInstance
foo.hello
//--> Hello DI!
For other versions, look here for example.
I love the concept of DI and loosely coupled system, a lot. However, I found tooling in Spring lacking at best. For example, it's hard to do "refactoring", e.g. to change a name of a bean declared in Spring. I'm new to Spring, so I would be missing something. There is no compiling time check etc.
You need a smarter IDE. IntelliJ from JetBrains allows refactoring, renaming, etc. with full knowledge of your Spring configuration and your classes.
My question is why do we want to use XML to store the configuration?
Why not? You have to put it somewhere. Now you have a choice: XML or annotations.
IMO, the whole idea of Spring (IoC part) is to force certain creational pattern. In the world of gang-of-four patterns, design patterns are informative.
ApplicationContext is nothing more than a big object factory/builder. That's a GoF pattern.
Spring (and other DIs) on the other hand, provides very prescribed way how an application should be hooked up with individual components.
GoF is even more prescriptive: You have to build it into objects or externalize it into configuration. Spring externalizes it.
I have put Scala in the title as well as I'm learning it. How do you guys think to create a domain language (something like the actor library) for dependency ingestion.
You must mean "injection".
Writing the actual injection code in Scala itself, you get all the goodies and tooling that comes with it.
Don't see what that will buy me over and above what Spring gives me now.
Although application developers might as well bypass your framework, I would think it's relatively easy to standard, such as the main web site/app will only load components of certain pattern.
Sorry, I'm not buying your idea. I'd rather use Spring.
But there's no reason why you shouldn't try it and see if you can become more successful than Spring. Let us know how you do.
There are different approaches to DI in java, and not all of them are necessarily based on xml.
Spring
Spring provides a complete container implementation and integration with many services (transactions, JNDI, persistence, datasources, MVC, scheduling, ...) and can actually be configured using Java annotations.
Its popularity stems from the number of services the platform integrates beyond DI (many people use it as an alternative to Java EE, which has actually been following Spring's path since its fifth edition).
XML was the original choice for Spring because it was the de-facto Java configuration standard when the framework came to be. Annotations are the popular choice right now.
As a personal aside, conceptually I'm not a huge fan of annotation-based DI; for me it creates a tight coupling of configuration and code, thus defeating the original underlying purpose of DI.
There are other DI implementations around that support alternative configuration styles: AFAIK Google Guice is one of those, allowing for programmatic configuration as well.
DI and Scala
There are alternative solutions for DI in Scala, in addition to using the known Java frameworks (which, as far as I know, integrate fairly well).
For me, the most interesting one that maintains a familiar approach to Java is Subcut.
It strikes a nice balance between Google Guice and one of the most well-known DI patterns enabled by the specific design of the Scala language: the Cake Pattern. You can find many blog posts and videos about this pattern with a Google search.
Another solution available in scala is using the Reader Monad, which is already an established pattern for dynamic configuration in Haskell and is explained fairly well in this video from NE Scala Symposium 2012 and in this video and related slides.
The latter choice comes with the warning that it requires a decent level of familiarity with the concept of monads, in general and in Scala, and it has often aroused debate about its conceptual complexity and practical usefulness. This related thread on the scala-debate ML can be quite useful to get a better grip on the subject.
I can't really comment on Scala, but DI helps enforce loose coupling. It makes refactoring large apps so much easier. If you don't like a component, just swap it out. Need another implementation for a particular environment? Easy: just plug in a new component.
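The swap-a-component idea needs no framework at all; hand-rolled constructor injection gives the same loose coupling Spring automates (all names below are hypothetical). Swapping implementations means changing one line at the composition point:

```java
// The abstraction consumers depend on.
interface MessageSource {
    String message();
}

final class FileMessageSource implements MessageSource {
    @Override
    public String message() { return "from file"; }
}

final class DbMessageSource implements MessageSource {
    @Override
    public String message() { return "from db"; }
}

// Depends only on the interface; the implementation is injected.
final class Printer {
    private final MessageSource source;

    Printer(MessageSource source) {
        this.source = source;
    }

    String render() {
        return "msg: " + source.message();
    }
}

public class ManualDiDemo {
    public static void main(String[] args) {
        // Swap FileMessageSource for DbMessageSource here to reconfigure the app:
        Printer printer = new Printer(new FileMessageSource());
        System.out.println(printer.render()); // msg: from file
    }
}
```

A container like Spring moves this wiring out of `main` into configuration, but the pattern is the same.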
I agree! To me, the way most people use Spring is a mild version of hell.
When you look at standard Springified code, there are interfaces everywhere, and you never really know which class implements an interface. You have to dig into some fantastic configuration to find that out. Easy to read it is not. To make this code navigable you need a very advanced IDE, like IntelliJ.
How did we end up in this mess? I blame automated unit testing! If you want to connect mocks to each and every class, you cannot have hard dependencies. If it weren't for unit testing, we could probably do just as well without loose coupling, since we do not want the customer to replace single classes willy-nilly in our JARs.
In Scala you can use patterns like the "Cake Pattern" to implement DI without a framework. You can also use structural typing to do this. The result is still messy compared to the original code.
Personally, I think one should consider doing automated testing on modules instead of classes to escape this mess, and use DI to decouple entire modules. This strategy is by definition not unit testing. I think most of the logic lies in the actual connections between classes, so IMHO one will benefit more from module testing than from unit testing.
I cannot agree that XML is a problem in Java and Spring:
I use Spring and Java extensively without too much XML, because most configuration is done with annotations (type plus name is a powerful contract), and it looks really nice. Only in about 10% of cases do I use XML, because it is easier to do in XML than to code a special solution with factories / new classes / annotations. This approach was inspired by Guice, and Spring 3.0 implements it as JSR-330 (though I actually use Spring 2.5 with a Spring factory configured with JSR-330 annotations instead of the default Spring-specific @Autowired).
Scala can probably provide better syntax for developing in a DI style, and I'm looking at it now (the Cake Pattern pointed out above).
