I'm working on a Maven build pipeline.
Currently I'm facing the problem that a Maven project still builds even if a dependency is invalid.
I think everyone knows this warning in the log:
[WARNING] The POM for is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details.
I would like the build to fail instead of just logging a warning, because in a big build pipeline the warning is hard to spot.
I looked into the code:
The warning is logged in maven-core in response to an EventType.ARTIFACT_DESCRIPTOR_INVALID event.
In DefaultArtifactDescriptorReader I found that the ModelBuildingException thrown while building the effective model is caught.
There is an ArtifactDescriptorPolicy. Depending on it, either an exception is added to the result or only the EventType.ARTIFACT_DESCRIPTOR_INVALID event is fired (see invalidDescriptor()):
try
{
    model = modelBuilder.build( modelRequest ).getEffectiveModel();
}
catch ( ModelBuildingException e )
{
    for ( ModelProblem problem : e.getProblems() )
    {
        if ( problem.getException() instanceof UnresolvableModelException )
        {
            result.addException( problem.getException() );
            throw new ArtifactDescriptorException( result );
        }
    }
    invalidDescriptor( session, trace, a, e );
    // with IGNORE_INVALID set, the invalid descriptor is swallowed and only the warning remains
    if ( ( getPolicy( session, a, request ) & ArtifactDescriptorPolicy.IGNORE_INVALID ) != 0 )
    {
        return null;
    }
    result.addException( e );
    throw new ArtifactDescriptorException( result );
}
I haven't found any option to configure the ArtifactDescriptorPolicy.
I expect that ArtifactDescriptorPolicy.STRICT would solve my problem.
Does anyone know more about this problem?
I think you're a bit ahead of the curve. There is a feature (a new switch) in Maven 4 called --fail-on-severity, or -fos, that does exactly this: it fails the build once a log message of the given severity appears:
$ mvn -fos WARN clean install
If you don't mind being on the bleeding edge, you could install it and give it a go.
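On Maven 3 there is no equivalent CLI switch. If you happen to be driving Maven Resolver (Aether) programmatically, though, the policy the question mentions can be set on the session. A minimal, untested sketch (class names are from org.eclipse.aether; the wrapper class name is mine):

import org.eclipse.aether.DefaultRepositorySystemSession;
import org.eclipse.aether.util.repository.SimpleArtifactDescriptorPolicy;

public class StrictDescriptorSession {
    public static DefaultRepositorySystemSession newStrictSession() {
        DefaultRepositorySystemSession session = new DefaultRepositorySystemSession();
        // false, false = tolerate neither missing nor invalid descriptors ("strict"):
        // DefaultArtifactDescriptorReader then throws ArtifactDescriptorException
        // instead of firing ARTIFACT_DESCRIPTOR_INVALID and continuing with a warning
        session.setArtifactDescriptorPolicy(new SimpleArtifactDescriptorPolicy(false, false));
        return session;
    }
}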
I am not sure how error messages in Substrate runtimes behave in relation to Substrate UI, and whether they inherently cause a transaction failure or not.
For example in the democracy SRML I see the following line:
ensure!(!<Cancellations<T>>::exists(h), "cannot cancel the same proposal twice");
This is presumably a macro that makes the transaction fail or stop processing if h (the proposal hash) already exists. There is clearly a message associated with this error.
Am I right to assume that the transaction fails (without the rest of the SRML code being executed) when this test fails?
If so, how do I detect the failure in Substrate UI, and possibly see the message itself?
If not, then presumably some further code is necessary in the runtime module to explicitly create an error. I have seen Err(), but not in conjunction with ensure!().
Now that https://github.com/paritytech/substrate/pull/3433 is merged, the ExtrinsicFailed event includes a DispatchError, which provides an additional error code.
There isn't much documentation available, so I will just use the system module as an example.
First you need to declare your errors with decl_error!; note that the error variants can only be a simple C-like enum:
https://github.com/paritytech/substrate/blob/5420de3face1349a97eb954ae71c5b0b940c31de/srml/system/src/lib.rs#L334
decl_error! {
    /// Error for the System module
    pub enum Error {
        BadSignature,
        BlockFull,
        RequireSignedOrigin,
        RequireRootOrigin,
        RequireNoOrigin,
    }
}
Then you need to associate the declared Error type with your module:
https://github.com/paritytech/substrate/blob/5420de3face1349a97eb954ae71c5b0b940c31de/srml/system/src/lib.rs#L253
decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        type Error = Error;
        // ...
    }
}
Then you can just return your Error from dispatch calls when things fail:
https://github.com/paritytech/substrate/blob/5420de3face1349a97eb954ae71c5b0b940c31de/srml/system/src/lib.rs#L543
pub fn ensure_root<OuterOrigin, AccountId>(o: OuterOrigin) -> Result<(), Error>
    where OuterOrigin: Into<Result<RawOrigin<AccountId>, OuterOrigin>>
{
    match o.into() {
        Ok(RawOrigin::Root) => Ok(()),
        _ => Err(Error::RequireRootOrigin),
    }
}
Right now you will only be able to see two numbers from the JS side: the module index and the error code. Later there could be support for including the error details in the metadata, so that front-ends can provide a better message.
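For example, a rough sketch of picking those two numbers out of the ExtrinsicFailed event with @polkadot/api (the exact event shape is my assumption for this era of the API, so treat it as a starting point, not gospel):

import { ApiPromise, WsProvider } from '@polkadot/api';

async function watchFailures(): Promise<void> {
  const api = await ApiPromise.create({ provider: new WsProvider('ws://127.0.0.1:9944') });
  // subscribe to the system event list emitted for each block
  api.query.system.events((events) => {
    events.forEach(({ event }) => {
      // an ExtrinsicFailed event carries the DispatchError as its first field
      if (event.section === 'system' && event.method === 'ExtrinsicFailed') {
        const [dispatchError] = event.data;
        console.log('extrinsic failed:', dispatchError.toString());
      }
    });
  });
}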
Related issue:
https://github.com/paritytech/substrate/issues/2954
The ensure! macro is expanded as:
#[macro_export]
macro_rules! fail {
    ( $y:expr ) => {{
        return Err($y);
    }}
}

#[macro_export]
macro_rules! ensure {
    ( $x:expr, $y:expr ) => {{
        if !$x {
            $crate::fail!($y);
        }
    }}
}
So basically, it's just a quicker way to return Err. As of 1.0, the error message is only printed to stdout (at least in what I've tested so far); I don't know whether it will be included on-chain in the future (so that it could be viewed in Substrate UI).
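Written out by hand, the line from the question is therefore equivalent to this early return (inside a dispatch function whose error type is &'static str, as in Substrate 1.0):

// ensure!(!<Cancellations<T>>::exists(h), "cannot cancel the same proposal twice");
// is just shorthand for:
if <Cancellations<T>>::exists(h) {
    return Err("cannot cancel the same proposal twice");
}
// execution only continues past this point when the check passed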
The manual of the Checker Framework claims "You can write multiple @EnsuresNonNullIf annotations on a single method", however I observe the following message if I try this:
@EnsuresNonNullIf(expression="getFieldNames()", result=true)
@EnsuresNonNullIf(expression="getFieldName(i)", result=true)
public boolean hasFieldNames() {
    return fFieldNames != null;
}
The resulting error message from the Eclipse Java compiler:
Duplicate annotation of non-repeatable type @EnsuresNonNullIf. Only annotation types marked @Repeatable can be used multiple times at one target.
The resulting error message from javac in the Maven build:
[ERROR] Blabla.java:[?,?] org.checkerframework.checker.nullness.qual.EnsuresNonNullIf is not a repeatable annotation type
I'm annotating 10-year-old code, so I'm hoping some configuration trick can save the day :-) Without multiple @EnsuresNonNullIf annotations I'm in for quite a bit of manual code annotation to fix false positives that I'm not interested in...
PS: I tried both checker-framework-2.8.1 and 2.9.0 with similar results, always using <maven.compiler.source>1.8</maven.compiler.source>.
I found this issue on the Checker Framework issue tracker: https://github.com/typetools/checker-framework/issues/1307
It describes an enhancement request for adding @Repeatable to the following CF annotations:
> @DefaultQualifier -- DONE
> @EnsuresKeyFor
> @EnsuresKeyForIf
> @EnsuresLockHeldIf
> @EnsuresLTLengthOf
> @EnsuresLTLengthOfIf
> @EnsuresMinLenIf
> @EnsuresNonNullIf
> @EnsuresQualifier -- DONE
> @EnsuresQualifierIf -- DONE
> @FieldInvariant
> @GuardSatisfied
> @HasSubsequence
> @MethodVal
> @MinLenFieldInvariant
> @RequiresQualifier -- DONE
> @SubstringIndexFor
And the discussion contains a workaround, since @EnsuresQualifierIf is already repeatable (via its @EnsuresQualifiersIf container):
@EnsuresQualifiersIf({
    @EnsuresQualifierIf(result=true, qualifier=NonNull.class, expression="getFoo()"),
    @EnsuresQualifierIf(result=false, qualifier=NonNull.class, expression="getBar()")
})
boolean hasFoo();
And in my case that works out to:
@EnsuresQualifiersIf({
    @EnsuresQualifierIf(result=true, qualifier=NonNull.class, expression="getFieldNames()"),
    @EnsuresQualifierIf(result=true, qualifier=NonNull.class, expression="getFieldName(i)")
})
public boolean hasFieldNames() {
    return fFieldNames != null;
}
I figured out that Maven's deployAtEnd mechanism has problems deploying if some modules use the odavid.mixin.plugin.
I get the following message for all modules:
[INFO] Deploying module.xy at end
After the last module in the reactor build has executed it should deploy, but I just get the same information - "Deploying at end".
By removing the mixin plugin part, it deploys successfully.
My Maven version is 3.2.5 and I use maven-deploy-plugin 2.8.2.
I think it depends on the effective POM merged by the mixin, which breaks the module counter so that the if clause can't be reached at runtime (Maven snippet attached).
Can someone confirm this problem, and is there any solution before I try to fix it myself?
// counts finished projects; the deploy only runs once the last reactor project is done
boolean projectsReady = READYPROJECTSCOUNTER.incrementAndGet() == reactorProjects.size();
if ( projectsReady )
{
    synchronized ( DEPLOYREQUESTS )
    {
        while ( !DEPLOYREQUESTS.isEmpty() )
        {
            ArtifactRepository repo = getDeploymentRepository( DEPLOYREQUESTS.get( 0 ) );
            deployProject( getSession().getProjectBuildingRequest(), DEPLOYREQUESTS.remove( 0 ), repo );
        }
    }
}
else if ( addedDeployRequest )
{
    getLog().info( "Deploying " + project.getGroupId() + ":" + project.getArtifactId() + ":"
        + project.getVersion() + " at end" );
}
I'm going to put a fat bounty on this as soon as the system permits.
What I'm specifically having trouble with is getting coverage and integration tests working. For these I see the uninformative error:
Resource not found: com.something.somethingelse.SomeITCase
Uninformative because there's nothing to relate it to, nor does it mean much to someone who lacks the context.
Here's other weirdness I'm seeing. In cases where there's no integration tests for a subproject, I see this:
JaCoCoItSensor: JaCoCo IT report not found: /dev/build/dilithium/target/jacoco-it.exec
Why would I see target? This isn't a Maven project, and a global search shows there's no target directory mentioned anywhere in the code base.
Then, there's this section of the documentation:
sonarqube {
    properties {
        properties["sonar.sources"] += sourceSets.custom.allSource.srcDirs
        properties["sonar.tests"] += sourceSets.integTest.allSource.srcDirs
    }
}
As near as I can tell, sourceSets.integTest.allSource.srcDirs returns Files, not Strings. Also, it should be:
sonarqube {
    properties {
        property "sonar.tests", "comma,separated,file,paths"
    }
}
Note that you get an error if the list contains a directory that doesn't exist. Of course, there's apparently no standard for which directory to put integration tests in, and for some sub-projects they may not even exist. The Gradle convention would be to simply ignore non-existent directories. Your code ends up looking like:
sonarqube {
    properties {
        // collect only the integration-test source dirs that actually exist
        StringBuilder builder = new StringBuilder()
        sourceSets.integrationTest.allSource.srcDirs.each { File dir ->
            if ( dir.exists() ) {
                builder.append(dir.getAbsolutePath())
                builder.append(",")
            }
        }
        if (builder.size() > 1) builder.deleteCharAt(builder.size() - 1) // drop trailing comma
        if (builder.size() > 1)
            properties["sonar.tests"] += builder.toString()
        properties["sonar.jacoco.reportPath"] +=
            "$project.buildDir/jacoco/test.exec,$project.buildDir/jacoco/integrationTest.exec"
    }
}
Sonar is reporting no coverage at all. If I search for the *.exec files, I see what I would expect, namely:
./build/jacoco/test.exec
./build/jacoco/integrationTest.exec
...but weirdly, I also see this:
./build/sonar/com.proj_name_component_management-component_proj-recordstate/jacoco-overall.exec
What is that? Why is it in such a non-standard location?
OK, I've added this code:
properties {
    println "Before: " + properties.get("sonar.tests")
    println "Before: " + properties.get("sonar.jacoco.reportPath")
    property "sonar.tests", builder.toString()
    property "sonar.jacoco.reportPath", "$project.buildDir/jacoco/test.exec,$project.buildDir/jacoco/integrationTest.exec"
    println "After: " + properties.get("sonar.tests")
    println "After: " + properties.get("sonar.jacoco.reportPath")
}
...which results in:
[still running]
I don't want any bounty or any points.
Just a suggestion.
Can you get ANY JaCoCo reports at all?
Personally, I would separate the two, namely JaCoCo report generation and Sonar.
I would first try to simply generate the JaCoCo reports, THEN I would look at why Sonar cannot get hold of them.
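For instance, a minimal sketch using only the stock Gradle JaCoCo plugin (no Sonar in the picture; default paths assumed) to verify that the .exec files and a readable report get produced at all:

// build.gradle - exercise JaCoCo on its own before wiring in Sonar
apply plugin: 'java'
apply plugin: 'jacoco'

jacocoTestReport {
    reports {
        // "gradle test jacocoTestReport" should then leave build/jacoco/test.exec
        // and an HTML report under build/reports/jacoco
        html.enabled = true
        xml.enabled = true
    }
}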
I have a Gradle build script with a handful of source sets that all have various dependencies defined (some common, some not), and I'm trying to use the Eclipse plugin to let Gradle generate .project and .classpath files for Eclipse. However, I can't figure out how to get all the dependency entries into .classpath: for some reason, only a few of the external dependencies are actually added, and as a result the Eclipse build fails with 1400 errors (building with Gradle works fine).
I've defined my source sets like so:
sourceSets {
    setOne
    setTwo {
        compileClasspath += setOne.runtimeClasspath
    }
    test {
        compileClasspath += setOne.runtimeClasspath
        compileClasspath += setTwo.runtimeClasspath
    }
}

dependencies {
    setOne 'external:dependency:1.0'
    setTwo 'other:dependency:2.0'
}
Since I'm not using the main source-set, I thought this might have something to do with it, so I added
sourceSets.each { ss ->
    sourceSets.main {
        compileClasspath += ss.runtimeClasspath
    }
}
but that didn't help.
I haven't been able to figure out any common properties of the libraries that are included, or of those that are not (although of course there has to be something). I have a feeling that all the included libraries are dependencies of the test source set, either directly or indirectly, but I haven't been able to verify that beyond noting that all of test's dependencies are there.
How do I ensure that the dependencies of all source-sets are put in .classpath?
This was solved in a way that was closely related to a similar question I asked yesterday:
// Create a list of all the configuration names for my source sets
def ssConfigNames = sourceSets.findAll { ss -> ss.name != "main" }.collect { ss -> "${ss.name}Compile".toString() }

// Find configurations matching those of my source sets
configurations.findAll { conf -> "${conf.name}".toString() in ssConfigNames }.each { conf ->
    // Add matching configurations to Eclipse classpath
    eclipse.classpath {
        plusConfigurations += conf
    }
}
Update:
I also asked the same question in the Gradle forums, and got an even better solution:
eclipseClasspath.plusConfigurations = configurations.findAll { it.name.endsWith("Runtime") }
It is not as precise, in that it adds other things besides the configurations from my source sets, but it guarantees that it will work. And it's much easier on the eyes =)
I agree with Tomas Lycken that it is better to use the second option, but it might need a small correction:
eclipse.classpath.plusConfigurations = configurations.findAll { it.name.endsWith("Runtime") }
This is what worked for me with Gradle 2.2.1:
eclipse.classpath.plusConfigurations = [configurations.compile]