I would like to use Spring LDAP to do some simple username/password verification for authentication purposes. While the actual jar file is quite small (less than a meg), it seems to have a lot of dependencies, as listed by link text.
By a lot I mean it seems to suck in over 50 things, many of which don't seem right, such as spring-jdbc, as I don't want any JDBC, only the LdapTemplate class and its bare dependencies. Without wasting too much time, is it possible to use spring-ldap with only a bare minimum of dependencies, which amounts to something like:
spring core
spring ldap
whatever logging deps they require.
spring tx
I don't see or appreciate why the rest of this stuff is required, and I was wondering whether anyone can verify that it isn't really needed in the end if one sticks to the basics. The other stuff I am referring to includes:
spring-orm // no JDBC
beans // I don't want IoC.
spring-aop // no need for AOP.
I intend to wire up the beans I will be using manually. I don't want more crap in there for what amounts to setting a few properties, and I want confirmation that I don't need what is probably there just to do the IoC stuff when all I want is the LDAP stuff.
A lot of the things that are sucked in are transitive dependencies: dependencies of the things that spring-ldap relies upon. You can explicitly exclude these when declaring your dependencies, using the exclusions tag inside the dependency.
<dependency>
    <groupId>org.springframework.ldap</groupId>
    <artifactId>spring-ldap</artifactId>
    <version>1.3.1.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
        </exclusion>
        <!-- other exclusions here -->
    </exclusions>
</dependency>
I am trying to use slf4j and log4j together. After some googling, I found some solutions:
How does simply adding slf4j to the pom.xml wrap log4j?
https://dzone.com/articles/adding-slf4j-your-maven
How to get SLF4J "Hello World" working with log4j?
The various names and versions of jars related to slf4j and log4j confuse me so much: slf4j-log4j12, log4j, log4j-core, log4j-over-slf4j, log4j-slf4j-impl, log4j-api, slf4j-impl, log4j12-api, log4j-to-slf4j ...... I can't even tell what each of these jars does.
So, which combination and version should I choose?
The standard way to use SLF4J is that it's the main logging framework that you use. (You call methods defined within the slf4j-api.) It, in turn, uses a "binding" such as slf4j-log4j12 which tells it how to talk to the "real" logging framework. And then you also need to have the real logging framework on your classpath, such as "log4j" version 1.2.
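As a concrete sketch of that three-piece setup (SLF4J over Log4j 1.2), the POM would contain something like the following; the versions are illustrative, not a recommendation:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.25</version>
</dependency>
<!-- the binding that tells SLF4J how to talk to Log4j 1.2 -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.25</version>
</dependency>
<!-- the "real" logging framework -->
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>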
Some newer logging libraries, such as Logback, are both the "binding" and the "real" framework, so if you want to use that as your logging framework, you only need logback-classic along with slf4j-api, so it's two libraries rather than three.
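For the Logback route, a minimal sketch is just these two (versions illustrative):
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.25</version>
</dependency>
<!-- Logback is both the SLF4J binding and the "real" framework -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
</dependency>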
The confusing "over" and "to" libraries exist as a way to deal with the reality that you probably depend on libraries that want to log in a different way than the way you've selected for your application, but it's nice to have everything directed into one framework. So, if you're using SLF4J and Logback, but you're depending on a library which logs using Log4j 1.2, you want to include the log4j-over-slf4j library, which will "intercept" any Log4j calls within any libraries in your application and translate them to be logged by SLF4J instead. Conversely, if you're logging with Log4j 1.2 directly (without SLF4J) and need to call a library that's using SLF4J, you're going to want to include the slf4j-log4j12 library to intercept those calls and translate them to Log4j for you. There are a variety of these kinds of libraries, each to intercept and translate from one particular logging framework to another.
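For example, the Log4j-1.2-to-SLF4J redirection described above is a single extra dependency (version illustrative); the real log4j jar should then not also be on the classpath:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>log4j-over-slf4j</artifactId>
    <version>1.7.25</version>
</dependency>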
But your question was "So, which combination and version should I choose?", which is rather broad, as we're not sure what it is that you're trying to do. Selecting a logging framework is like any other technology framework decision, based on a lot of things like developer familiarity, what the systems you need to integrate with are using, and if there are any existing code or standards which one wants to stay consistent with. So, I'm going to try to avoid getting into that selection process too much, and answer your question about how to set up Maven to use SLF4J as your logging framework, backed by Log4j version 2:
Add a dependency for the current version of slf4j-api
Add a dependency for Log4j 2 and its SLF4J binding (From https://logging.apache.org/log4j/2.x/maven-artifacts.html):
<dependencies>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-api</artifactId>
        <version>2.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-slf4j-impl</artifactId>
        <version>2.6.1</version>
    </dependency>
</dependencies>
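The slf4j-api dependency from step 1 isn't shown in that block; declared explicitly it would look something like this (use whatever the current version is, the one here is only a placeholder):
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.21</version>
</dependency>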
Use <dependencyManagement> sections in your POM to ensure that all your dependencies use the same version of your logging framework. (For instance, many libraries will include slf4j-api as a dependency, but they may each use a different version.) Logging frameworks generally keep good compatibility between versions, so you usually want to override all the supplied dependency versions with the (usually newer) one that you're using.
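For instance, a sketch that pins slf4j-api to a single version regardless of what your other dependencies declare (version illustrative):
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
        </dependency>
    </dependencies>
</dependencyManagement>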
If you have any libraries that are using another logging framework, use the appropriate interceptor bridge to redirect their logging: either one from Log4j 2 that will redirect it straight to Log4j 2, or one from SLF4J that will redirect to SLF4J, which will then be further directed to Log4j. (While it may seem wasteful to redirect twice, it could make things easier if you were to keep SLF4J but change to another "real" logging framework at some point. Maybe.) For instance, if you have a library that uses commons-logging, you want to include jcl-over-slf4j instead.
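That bridge is a single extra dependency (version illustrative):
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>jcl-over-slf4j</artifactId>
    <version>1.7.25</version>
</dependency>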
Also, use the maven-enforcer-plugin's bannedDependencies rules to ensure that you're excluding any logging frameworks that you're not using that the libraries you're depending on are trying to bring into your project. That is, for that example I gave of a library you depend on that uses commons-logging, you need to <exclude> commons-logging from that library dependency, and add it to your bannedDependencies list to ensure that you don't accidentally get it again from some other library. Otherwise, you'll have both the "real" commons-logging as well as your fake bridge (that emulates the interface and translates to your real logging framework) on the classpath, and will run into trouble.
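A sketch of such an enforcer rule, continuing the commons-logging example (the plugin version and execution id are just placeholders):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-enforcer-plugin</artifactId>
    <version>1.4.1</version>
    <executions>
        <execution>
            <id>ban-duplicate-logging-frameworks</id>
            <goals>
                <goal>enforce</goal>
            </goals>
            <configuration>
                <rules>
                    <bannedDependencies>
                        <excludes>
                            <!-- fail the build if commons-logging sneaks back in transitively -->
                            <exclude>commons-logging:commons-logging</exclude>
                        </excludes>
                    </bannedDependencies>
                </rules>
            </configuration>
        </execution>
    </executions>
</plugin>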
I hope that overview helps. Note I haven't actually tried running Log4j 2 in anything yet, and just got those dependencies from their documentation. Definitely test that everything's working the way you expect.
I want to use a library that has the following dependency:
<dependency>
    <groupId>com.google.code.findbugs</groupId>
    <artifactId>annotations</artifactId>
    <version>2.0.3</version>
</dependency>
I read that FindBugs is for static analysis of Java code, so I thought it wasn't necessary to include it in the application. Is it safe to exclude the jar with <scope>provided</scope> or with an <exclusion>...</exclusion>?
One reason to exclude it is that there is a company policy against the (L)GPL licence.
Yes, you can safely exclude this library. It contains only annotations which do not need to be present at runtime. Take care to have them available for the FindBugs analysis, though.
Note that you should also list jsr305.jar, like this:
<dependency>
    <groupId>com.google.code.findbugs</groupId>
    <artifactId>annotations</artifactId>
    <version>3.0.2</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.google.code.findbugs</groupId>
    <artifactId>jsr305</artifactId>
    <version>3.0.2</version>
    <scope>provided</scope>
</dependency>
Both JARs are required to make these annotations work.
Check the most recent findbugs version in Maven Central.
FindBugs is provided under the LGPL, so there should not be any problems for your company. Also, you are merely using FindBugs; you are not developing something derived from FindBugs.
In theory, it should be entirely safe (as defined in the OP's clarifying comment) to exclude the Findbugs transitive dependency. If used correctly, Findbugs should only be used when building the library, not using it. It's likely that someone forgot to add <scope>test</scope> to the Findbugs dependency.
So - go ahead and try the exclusion. Run the application. Do you get classpath errors, application functionality related to the library that doesn't work, or see messages in the logs that seem to be due to not having Findbugs available? If the answer is yes I personally would rethink using this particular library in my application, and would try to find an alternative.
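For example, the exclusion approach would look roughly like this (the outer coordinates are placeholders for whatever library you are actually depending on):
<dependency>
    <groupId>com.example</groupId>
    <artifactId>some-library-you-depend-on</artifactId>
    <version>1.0</version>
    <exclusions>
        <exclusion>
            <groupId>com.google.code.findbugs</groupId>
            <artifactId>annotations</artifactId>
        </exclusion>
    </exclusions>
</dependency>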
Also, congratulations on doing the classpath check up front! As a general practice, it is a great idea to do what you have done every time you include a library in your application: add the library, then check what other transitive dependencies it brings, and do any necessary classpath clean-up at the start. When I do this I find it makes my debugging sessions much shorter.
I inherited a web service that used to work fine until we had to upgrade the runtime environment (from JBOSS/JRE6 to Tomcat7/JRE7). There was no code change except for the pom.xml!
In fact it still works fine, except that the many existing clients can no longer handle the response because of an extra namespace attribute now present in one of the elements (of the response).
That is, previously (before the migration) that element (in the SOAP response) used to be:
<OurResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="OurResponse.xsd"
ourresponseVersion="M1m2v03" xmlns="">
And now it is:
<v01:OurResponse acknowledgementVersion="M1m2v03"
xmlns:v01="http://webservice.ourdomain.com/projone/modtwo/M1m2v03">
Since there was no code change involved, I am baffled by this (minor but critical) change in the SOAP response.
In particular, I am trying to understand:
Which part of the build system changes this namespace attribute?
How do I restore it back to previous behavior?
Why would the clients break on such a minor change? (i.e. the content of the response is identical!)
The only relevant changes I have been able to spot in the pom.xml are:
Adding the following dependencies:
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk16</artifactId>
    <version>1.46</version>
</dependency>
<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>2.7.4</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>3.0.7.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>net.sf.ehcache</groupId>
            <artifactId>ehcache</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Updating the cxf-rt-frontend-jaxws dependency from version 2.2.7 to 2.7.7.
Updating the cxf-rt-transports-http dependency from version 2.2.7 to 2.7.7.
Updating the cxf-rt-ws-security dependency from version 2.2.7 to 2.7.7.
Adding the following dependencies:
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-core</artifactId>
    <version>2.7.7</version>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-databinding-aegis</artifactId>
    <version>2.7.7</version>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-management</artifactId>
    <version>2.7.7</version>
</dependency>
Again, I am assuming there is some internal change in one of the frameworks involved (CXF? Spring?) that handles this internally. If this assumption is correct, then:
Which part of the build system changes this namespace attribute?
How do I restore it back to previous behavior?
Why would the clients break on such a minor change? (i.e. the content of the response is identical!)
Update 1:
The culprit turned out to be the org.apache.cxf packages version change from 2.2.7 to 2.7.7.
Looks like newer is not always better... unless there is a way to programmatically force the legacy behavior of stripping out the namespace prefixes?
Update 2: Using CXF 2.2.7 on Tomcat7/JRE7 had the side-effect of killing the Tomcat server after sending a single SOAP message (seems to be related to SSL).
The fact that a venerable server like Tomcat can be killed by a single rogue .war package is pretty disturbing, but since I cannot fix Tomcat and I have not found a programmatic way to work around the implicit namespace prefix issue, I tried various stable CXF releases that would exhibit the legacy behavior without killing Tomcat.
I tried versions 2.7.1 and 2.6.10 but eventually only 2.5.9 worked.
I hope this helps someone who stumbles on a similar problem.
Conforming XML implementations are not permitted to break due to changes in the use of xmlns attributes. Expressing the same data model with a prefix or without, it's the same thing. If your client failed, you need to fix the client. If you have clients that are hypersensitive to the use of namespace prefixes instead of to the real data model, CXF is not necessarily a good choice.
Most likely CXF upgraded to a more recent version of JAX-B, and it changed its mind about the namespace prefixes.
To elaborate on this: Apache CXF was designed to focus on standards-conforming web services. Apache Axis has traditionally filled the space for not-so-standards-conforming web services just fine. So no, the CXF development community has never worried about 'prefix stability'. If the XML is formally correct, the CXF tests are happy.
For this, and many other reasons, CXF delegates XML generation for JAX-B web services to the official JAX-B reference implementation. New versions of CXF pick up new versions of JAX-B. JAX-B, from time to time, makes changes that have the effect of rearranging the namespace prefixes.
The XML generation in CXF is pluggable, so if you want to use an older JAX-B, or roll your own, you can. You can provide a 'Provider' and do the whole job yourself if you like.
There is an option in CXF to pass an object into JAX-B that decides what prefix to use for what namespace, but I don't think that it can be used to force a particular namespace to be defaulted. You might be able to get what you want with a Provider and a carefully configured call to the JAX-B API.
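If you want to experiment with that option, CXF's JAXBDataBinding exposes a namespaceMap property mapping namespace URIs to preferred prefixes. A rough Spring-configuration sketch follows; the endpoint id, bean name, and address are placeholders, and, as noted, this picks a prefix rather than forcing the namespace to be defaulted:
<jaxws:endpoint id="ourResponseEndpoint"
                implementor="#ourResponseServiceBean"
                address="/OurResponseService">
    <jaxws:dataBinding>
        <bean class="org.apache.cxf.jaxb.JAXBDataBinding">
            <property name="namespaceMap">
                <map>
                    <!-- ask JAXB to use this prefix for the service namespace -->
                    <entry key="http://webservice.ourdomain.com/projone/modtwo/M1m2v03" value="v01"/>
                </map>
            </property>
        </bean>
    </jaxws:dataBinding>
</jaxws:endpoint>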
The CXF User mailing list archive has hundreds of messages to and from people who are swimming upstream with namespace prefixes.
(As for tomcat dying, well, that's another question.)
I have an ancestor dependency that is scoped as provided, and I need to propagate that scoping to anything that depends on my project.
For example, say I have SomeProjectA which depends on SomeLibraryB. I need to scope SomeLibraryB as provided.
Currently, anything that depends on SomeProjectA also has to declare SomeLibraryB as provided in order to compile. I would rather propagate that scoping than make every project that depends on mine deal with my project's dependencies.
I don't think that is possible. Each project should declare provided dependencies on its own. Propagating that scope would be wrong, since you would be making an assumption about the deployment that you can't make: you are not responsible for the deployment. The user of your library is.
Simple hack
This can be achieved by a simple hack.
You can exclude SomeLibraryB and SomeLibraryC inside your direct dependency tag.
Below is the dependency tag for your SomeProjectA.
<dependency>
    <groupId>org.direct.dependency</groupId>
    <artifactId>SomeProjectA</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.some</groupId>
            <artifactId>SomeLibraryB</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.some</groupId>
            <artifactId>SomeLibraryC</artifactId>
        </exclusion>
    </exclusions>
</dependency>
But with only this configuration, your compilation, tests, and other validations will start to fail. Therefore, you can add direct dependencies on those libraries with <scope>test</scope>.
<dependency>
    <groupId>org.some</groupId>
    <artifactId>SomeLibraryB</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.some</groupId>
    <artifactId>SomeLibraryC</artifactId>
    <scope>test</scope>
</dependency>
As you can see from the Maven documentation, the provided scope doesn't influence compilation, but runtime. In general you should only need to specify the provided scope in the dependencies for a packaging project, such as a project of type war. For this reason it doesn't usually matter much that it isn't transitive.
In other words, if you add a dependency to a jar project without explicitly specifying its scope, that dependency will be made available during compilation and so will that dependency's dependencies. If you then explicitly declare that dependency to have provided scope, this does not change.
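The classic illustration is a war project that compiles against the Servlet API but must not package it, because the container provides it at runtime:
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
</dependency>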
I want to exclude all transitive dependencies from one dependency. In some places I've seen it suggested to use a wildcard for that
<dependency>
    <groupId>myParentPackage</groupId>
    <artifactId>myParentProject</artifactId>
    <version>1.00.000</version>
    <exclusions>
        <exclusion>
            <groupId>*</groupId>
            <artifactId>*</artifactId>
        </exclusion>
    </exclusions>
</dependency>
When I do that I get a warning:
'dependencies.dependency.exclusions.exclusion.groupId' for myParentPackage:myParentProject:jar with value '*' does not match a valid id pattern. # line 146, column 30
The declaration itself is successful though: The transitive dependencies really are ignored in my build.
I've also found an old feature request that asks for exactly this feature.
So now I don't know whether this is a deprecated feature that I shouldn't use, whether the warning is wrong, or whether the feature hasn't been completely implemented yet (I'm using Maven 3.0.4). Does anybody know more about this?
This is a supported feature in Maven 3.2.1 - see 'Transitive dependency excludes' section in the release notes.
I hate getting Maven warnings myself. I've seen the wildcard approach but have avoided it. Run a mvn dependency:tree goal, discover the top-level dependencies belonging to the artefact in question and exclude each one individually (hopefully the list isn't so vast). This is by far the safest way to approach this problem.
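In other words, instead of the wildcard, something along these lines, with one exclusion per transitive dependency that dependency:tree reports (the excluded coordinates here are placeholders):
<dependency>
    <groupId>myParentPackage</groupId>
    <artifactId>myParentProject</artifactId>
    <version>1.00.000</version>
    <exclusions>
        <exclusion>
            <groupId>some.group</groupId>
            <artifactId>some-transitive-artifact</artifactId>
        </exclusion>
        <exclusion>
            <groupId>another.group</groupId>
            <artifactId>another-transitive-artifact</artifactId>
        </exclusion>
    </exclusions>
</dependency>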
As far as I know, this feature does not yet exist. In the feature request you linked, you can see its status is still "Unresolved".