I have created an Eclipse plugin and converted it to Maven. It needs a dependency on ANTLR, but when the plugin executes it says it can't find the required package. I then learned that the ANTLR jar is not an OSGi bundle. Can anyone please tell me how to convert the ANTLR jar file into an OSGi bundle? The ANTLR dependency must work with my Mavenized Eclipse plugin.
The main ANTLR 4 project doesn't support this (see issue #689). However, I've recently created an independent fork of the project which aims to target a number of issues related to the use of ANTLR 4 in large(r)-scale and/or performance-critical applications. One of the items I'd like to implement is using OSGi for improved runtime versioning instead of the manual mechanism currently in place. I recommend filing an issue with this fork of the project so I can include these changes as part of my initial release.
https://github.com/tunnelvisionlabs/antlr4/issues
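For what it's worth, a common way to wrap a plain third-party jar as an OSGi bundle yourself is a small wrapper project built with the Apache Felix maven-bundle-plugin. The sketch below is only an illustration of that approach; the coordinates, versions and exported packages are placeholders you would need to adjust:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>antlr4-runtime-osgi</artifactId>
      <version>4.5.3</version>
      <!-- 'bundle' packaging is contributed by the maven-bundle-plugin -->
      <packaging>bundle</packaging>

      <dependencies>
        <dependency>
          <groupId>org.antlr</groupId>
          <artifactId>antlr4-runtime</artifactId>
          <version>4.5.3</version>
        </dependency>
      </dependencies>

      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.felix</groupId>
            <artifactId>maven-bundle-plugin</artifactId>
            <version>5.1.9</version>
            <extensions>true</extensions>
            <configuration>
              <instructions>
                <Bundle-SymbolicName>org.antlr.antlr4-runtime</Bundle-SymbolicName>
                <!-- embed the runtime classes and export the runtime packages -->
                <Embed-Dependency>antlr4-runtime;inline=true</Embed-Dependency>
                <Export-Package>org.antlr.v4.runtime.*</Export-Package>
              </instructions>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </project>

The resulting jar carries the OSGi headers in its manifest, so an Eclipse/OSGi runtime can resolve it like any other bundle.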
Related
So I have been struggling all afternoon with getting a Gradle build to work for a Kotlin multiplatform project that involves an ANTLR grammar. What I'm trying to do is have ANTLR generate a parser from a shared grammar for both a Kotlin (or Java, if that doesn't work) target and a JavaScript target. Based on this I'd like to write a library around the parser that can be used from the JVM and from JavaScript.
So I have set up a Kotlin multi-platform project because that seemed like a nice way of killing two birds with one stone (here is a repo https://github.com/derkork/project.txt). I created a source set commonAntlr where I placed the grammar file under commonAntlr/antlr/project_txt.g4. According to the documentation of the ANTLR plugin this is how stuff should be set up. I also apply the antlr plugin at the top (here is the build.gradle.kts - https://github.com/derkork/project.txt/blob/master/build.gradle.kts).
Now I run gradle build in the hope that the ANTLR plugin will at least try to generate some nice Java code for me from the grammar using the default settings. Alas, it does not. The ANTLR plugin does not even get started, as far as I can tell from the output. The build later fails with some obscure JavaScript problem, but that looks unrelated and I'd like to skip over it for now.
Now my Gradle-fu isn't exactly strong (I have only used it in extremely simple projects; most of my experience is in Maven), and I have the distinct feeling I'm missing something here. However, the documentation of the plugin just says:
To use the ANTLR plugin, include the following in your build script:
plugins {
    antlr
}
Which I did. Since I get zero output, I have a feeling I need to do a bit more for this to work. I have read a lot of the Gradle documentation to find out how plugins work in general, and I found that they add tasks to the build and also set up dependencies between tasks so that the right tasks are invoked when you try to build certain things. However, I don't really understand how plugins work together with source sets, or how you can tell Gradle "would you please run the generateGrammarSource task for this source set" (or whether it even works like that).
So if some of the Gradle gods could enlighten me on this, this would be much appreciated :)
I have run into a similar issue: https://gitlab.com/pika-lab/tuprolog/2p-in-kotlin/tree/feature/parser
My solution -- which is still a work in progress -- consists of decomposing the problem.
A reasonable solution, in my opinion, is to create a Kotlin/JVM project (say parser-jvm) holding the generated Java code plus any JVM-specific facilities, and a Kotlin/JS project (say parser-js) holding the generated JS code plus any JS-specific facilities. The next step is to create a Kotlin/MPP project (say parser-common) whose JVM implementation depends on parser-jvm and whose JS implementation depends on parser-js.
My approach is actually working for the JVM, while I'm experiencing some issues for JS, which are mostly caused by this issue.
The main drawback of this approach is that some Gradle coding is required to set up ANTLR with Kotlin/JS. I already faced this problem in my build.gradle and I'm quite satisfied with the result and the overall architecture of the project. However, I believe my proposal is far less troublesome than configuring a Kotlin/MPP project to work with ANTLR.
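For the JVM side the Gradle part can stay quite small. Below is a minimal sketch of what a dedicated parser-jvm module's build.gradle.kts could look like; the versions and coordinates are assumptions, not taken from the linked repositories:

    // parser-jvm/build.gradle.kts (sketch)
    plugins {
        antlr
        kotlin("jvm") version "1.9.24"
    }

    repositories {
        mavenCentral()
    }

    dependencies {
        // the 'antlr' configuration selects the ANTLR tool used for code generation
        antlr("org.antlr:antlr4:4.13.1")
        // the generated parser needs the runtime at compile time and run time
        implementation("org.antlr:antlr4-runtime:4.13.1")
    }

    // the ANTLR plugin adds a generateGrammarSource task and picks up grammars
    // from src/main/antlr by default
    tasks.generateGrammarSource {
        arguments = arguments + listOf("-visitor", "-long-messages")
    }

    // make sure Kotlin compilation sees the generated Java sources
    tasks.compileKotlin {
        dependsOn(tasks.generateGrammarSource)
    }

With a layout like this, building parser-jvm runs generateGrammarSource before compilation, and the JVM source set of parser-common can simply depend on the parser-jvm module.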
I am currently starting my adventure with Maven, and I don't really understand the idea behind using it to automate compilation of my source code. For the time being I am working on small projects with up to 15-20 classes and one main method in the "app" class. Could someone please explain, with examples, when it's necessary (or recommended) to use a build automation tool to compile source code, and how I could benefit from using one for source code compilation?
Thank you very much in advance!
I was looking for answers to other questions and I have a lot of work to do, but since I've seen this question, as a Maven fanboy, I couldn't resist any longer, and below is my answer.
First of all, I agree with JF Meier, who answered before me, but I think the answer can be improved.
IMO you have to consider Maven not just as a build tool, but as a multi-purpose tool which can help you do very different things. The three best uses, for me, are:
Compiler. Obviously. Maven allows you to easily compile giant projects with a lot of submodules, even if some of these modules depend on each other.
Dependency and repository manager. Maven allows you to automatically download third-party software and bind this download to the build. This is immediately understandable if you think of framework or API dependencies from big organizations (the Apache Foundation, Spark, Spring, Hibernate and so on), but it's really powerful in every enterprise context.
Example: you have a Maven project (let's say project A) which manages requests coming from a webservice and provides responses. This Maven project relies on another Maven project (let's say project B) which actually builds the webservice jar and uploads it to a company repository. Well, when you have to add a field or a method to the webservice, you just have to implement the new code in project B, upload it to the repo and change the version in the Maven POMs of both project A and B. Voilà: now EVERY developer in the company just has to "mvn clean install" project A to get the new version.
Automatic source and code generator. Since Maven 2.x, a lot of plugins have been available (from the Apache Foundation and others) which allow you to generate code and sources (typically XML files) starting from little to no implementation.
Example 1: the CXF plugin is commonly used to generate Java classes from XML or XSD files.
Example 2: the JAX-WS plugin is commonly used to generate a WSDL from a SOAP webservice implementation, or an implementation starting from a WSDL file.
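To make Example 1 concrete, this is roughly what such a code-generation plugin declaration looks like in a pom.xml (the version and the WSDL path are just placeholders):

    <plugin>
      <groupId>org.apache.cxf</groupId>
      <artifactId>cxf-codegen-plugin</artifactId>
      <version>3.5.5</version>
      <executions>
        <execution>
          <id>generate-sources</id>
          <phase>generate-sources</phase>
          <goals>
            <goal>wsdl2java</goal>
          </goals>
          <configuration>
            <!-- Java classes are generated here from the WSDL before compilation -->
            <sourceRoot>${project.build.directory}/generated-sources/cxf</sourceRoot>
            <wsdlOptions>
              <wsdlOption>
                <wsdl>${project.basedir}/src/main/resources/wsdl/service.wsdl</wsdl>
              </wsdlOption>
            </wsdlOptions>
          </configuration>
        </execution>
      </executions>
    </plugin>

Running "mvn generate-sources" (or any later phase) then produces the classes without writing them by hand.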
Do you feel the power now?
-Andrea
The question is not very specific, but I will try to answer.
Usually, you want your source code to end up in a jar or war, so that you can use it as a library or run it somewhere (e.g. on an application server).
Maven not only compiles the classes you have and creates the final artifact (jar, war), but also handles your dependencies, e.g. the libraries your project depends upon.
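As an illustration, a minimal pom.xml for such a small project could look like this (the coordinates and the dependency are just examples):

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>my-app</artifactId>
      <version>1.0.0</version>
      <packaging>jar</packaging>

      <properties>
        <maven.compiler.source>17</maven.compiler.source>
        <maven.compiler.target>17</maven.compiler.target>
      </properties>

      <dependencies>
        <!-- Maven downloads this library (and its transitive dependencies) for you -->
        <dependency>
          <groupId>org.apache.commons</groupId>
          <artifactId>commons-lang3</artifactId>
          <version>3.12.0</version>
        </dependency>
      </dependencies>
    </project>

With that in place, "mvn package" compiles the classes, runs the tests and produces target/my-app-1.0.0.jar.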
Java 9 is scheduled to be released soon (July 27). Are there any plans to release a Java 9 compliant version of Spring projects that will be modular (Java 9 project Jigsaw)?
Spring 5, the next major version of Spring, won't be modular. However, you can use Spring 5 jars/artifacts as automatic modules in your module-info files. See the official blog post and the What's new announcement.
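For example, your own application module could require Spring 5 jars via the module names that their Automatic-Module-Name manifest entries declare (the application module name below is made up):

    // module-info.java of your own application (sketch)
    module com.example.app {
        // Spring 5 artifacts are treated as automatic modules
        requires spring.core;
        requires spring.beans;
        requires spring.context;
    }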
Concerning module-info.java, see the latest comment on the Declare Spring modules with JDK 9 module metadata issue:
This issue is marked as "General Backlog", indicating that we won't deal with it for 5.1 (otherwise it'd be marked for 5.1 GA still) and probably not in subsequent 5.x releases either (otherwise it'd be marked as "5.x Backlog").
Specifically, we can't ship module-info files quite yet since we'd need stable module names for all of our optional dependencies... and many of those don't declare stable module names at this point (that is, they don't even include an Automatic-Module-Name manifest entry in their jar). Also, we'd need to build the entire framework on JDK 9+ for the compiler to understand the module-info.java format which is not entirely trivial either, even if the framework itself is known to work fine with JDK 9/10/11 at runtime.
All in all, my prediction about module-info files for 5.1 turned out to be too ambitious. Our current focus is on general JDK 11 compatibility (SPR-16391) on the classpath and as automatic modules on the module path, as well as GraalVM compatibility (SPR-16991). The use of jlink requires manual addition of module-info.class files to the framework jars for the time being... which might stay that way for several years still until we ship a JDK 11 baselined Spring Framework 6.0 against a new generation of dependencies.
Update for 11/3/2022
from Juergen Hoeller in #18079
Our strategic alignment with the module system has been in competition with our AOT and GraalVM native image efforts in 6.0, so we unfortunately had no chance to experiment with a build migration to full module descriptors yet. There have been very few requests for it even in the course of this year, so we wonder whether there is much practical value to be uncovered here for the time being anyway. Looking forward, the use of jlink's module-bounded approach for application/framework-level modules might get superseded by runtime images based on GraalVM-style individual reachability analysis in the long run.
That said, OpenJDK's Project Leyden aims to reuse module system concepts and tools for its standardized static image approach, so deeper module system alignment remains part of our long-term technology strategy for the Spring Framework 6.x generation.
I found this note in the Maven documentation:
You can add elements to this classloader by extensions. These are loaded into the same place as ${maven.home}/lib and hence are available to the Maven core and all plugins for the current project and subsequent projects (in future, we plan to remove it from subsequent projects).
I couldn't understand what they mean by "subsequent projects" here. As far as I understand, extensions are enhancements to Maven's lifecycle phases and are not project-specific, so it makes sense for them to apply to all Maven projects.
Question: can anyone explain what this statement means: "in future, we plan to remove it from subsequent projects"?
First, an extension can extend the lifecycle, but it doesn't need to. You can also implement an extension as an EventSpy, for example.
This documentation is about the core classloader, which is visible to such extensions and which can also be enhanced via an extension. This classloader contains the files from ${maven.home}/lib, which is neither a good idea nor necessary. It would be better to have only the Maven Plugin API there, plus the instances that are currently used, and nothing more...
There are some extensions, like Wagon providers, which are used to make transports possible in special cases and which can be project-specific.
Starting with Maven 3.3.1, the core extensions mechanism has been improved to make loading project-specific extensions simpler: they are declared in the ${maven.projectBasedir}/.mvn/extensions.xml file and are resolved from a repository. Before 3.3.1 you had to do that manually via mvn -Dmaven.ext.class.path=extension.jar.
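For illustration, such a file just lists the coordinates of the extensions to load (the extension below is only an example):

    <!-- ${maven.projectBasedir}/.mvn/extensions.xml -->
    <extensions xmlns="http://maven.apache.org/EXTENSIONS/1.0.0">
      <extension>
        <groupId>io.takari.maven</groupId>
        <artifactId>takari-smart-builder</artifactId>
        <version>0.6.1</version>
      </extension>
    </extensions>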
We have a very big web application containing many features. Now, for maintainability, we want to split the application into components so that we can remove/add particular components (jars). One suggestion that has come up is to use OSGi. I think converting the jars into bundles will take a huge effort, and I think the same functionality can be achieved with Maven. According to my understanding, OSGi is a packaging tool. If I can make a Maven plug-in for each component, then any particular component can be included or removed at compile time, as opposed to run time as in the case of OSGi.
Modularizing the application using Maven would be simpler than OSGi. I have read a similar post on this site which commented that comparing OSGi and Maven is like comparing apples with oranges. But I think in one sense both are the same, as both are meant for packaging; the difference is that one works at run time and the other at compile time.
Looking forward to a well-thought-out answer :)
best wishes
Shailesh
As you already hinted at yourself: you're comparing apples with oranges.
OSGi is not a packaging tool.
OSGi bundles are plain JAR files with some OSGi-specific metadata in the Manifest file.
You can create OSGi bundles using Maven, e.g. with the Maven Bundle Plugin (I can recommend this approach). So regardless of whether you're using OSGi or not, I strongly recommend using Maven.
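For illustration, the OSGi-specific part of a bundle is just a few extra manifest headers like these (the names below are made up):

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-SymbolicName: com.example.mycomponent
    Bundle-Version: 1.0.0
    Export-Package: com.example.mycomponent.api
    Import-Package: org.slf4j

Everything else is a normal jar, which is why a bundle can still be used on a plain classpath.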
Here are some use cases for OSGi:
You want to create different versions of your application, e.g. for different customers. With OSGi you can just add/remove bundles without having to touch any other configuration.
You need a plugin system so 3rd parties can provide plugins to your application
You want your application to be truly modular
You want to share some code with other applications but want to hide some internal classes
...
OSGi is much, much more than a packaging tool. You could say that OSGi has a packaging tool inside it. Maven is a packaging tool and a dependency manager. I'd say that, given the level of complexity and the use you say you'll make of this technology, you should go with Maven.