Does the order of include statements in settings.gradle matter?

Any difference between these?
settings.gradle
Case A:
include ':con', ':conlib', 'aaa:test'
vs. Case B:
include 'aaa:test'
include ':con'
include ':conlib'
No difference? Or does it affect the build order or something?

No, order (probably) does not matter. To be clear, this is an educated guess; after following the docs link I didn't learn much, but there are three reasons this has to be true:
1) Changing the order of include() calls does not invalidate build caches. This is a good hint that the Gradle team does not see the order of include statements as important.
2) The docs do not stipulate a required or meaningful order, at least as far as I can tell from the available links.
3) buildSrc is compiled after settings.gradle[.kts]. Any time after Gradle 6 (see here), buildSrc is compiled after settings.gradle.kts and is made available in your build. It stands to reason, then, that the order of include() calls cannot matter, because if it did, it would transitively require a rebuild of your buildSrc, which would invalidate your entire build.
Again, these are just educated guesses based on observations; I hope this helps.

The impact of the ordering of include {submodule_name} can be found in the Gradle docs.
But it should also be noted that the argument has to be a {submodule_name}, not {submodule_name}:{gradle_task} (like aaa:test); see the sketch below.
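To make that concrete, here is a minimal settings.gradle sketch (project names reused from the question; the directory layout is an assumption):

// settings.gradle — each argument to include() is a project path, not a task path
include ':con'      // subproject expected in ./con
include ':conlib'   // subproject expected in ./conlib
// ':aaa:test' does not mean "run the 'test' task of project 'aaa'";
// it declares a nested subproject ':aaa:test' expected in ./aaa/test
include ':aaa:test'

Whether these calls appear on one line or several, and in whatever order, the same set of projects is registered.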

Why are gradle.properties being applied to the importing project?

I have two projects (pA, pB).
In pA I have:
Some common Gradle build script files (like foo.gradle, bar.gradle).
gradle.properties defining propertyA=a, propertyB=b.
In pB I am applying foo.gradle and bar.gradle like this:
buildscript {
apply from: '/path/to/foo.gradle', to: buildscript
...
}
apply from: '/path/to/bar.gradle'
In pB I have a gradle.properties where I have propertyA=blah, propertyB=moreBlah.
I don't understand why I'm getting the properties in pB as a and b.
I don't have these properties defined anywhere else.
Any ideas why this is happening?
I think this is likely coming from not defining these properties in the root project, but in the modules. gradle.properties usually resides in the root project and is common to all the modules, so it only gets parsed once; that might explain the behavior. Even if it may appear illogical, the order of execution (and of parsing that file) might cause this, and if you flipped the order of the modules, it should apply the other way around. Better to move the file up by one level and find a cleaner way to accomplish the task.
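As a hedged illustration of that suggested fix (property names are from the question; the layout is an assumption), the properties would be defined exactly once, in the root project:

# <root>/gradle.properties — single source of truth for every module
propertyA=a
propertyB=b

Any module's build script can then read them, e.g. println project.property('propertyA'), without a second gradle.properties shadowing or being shadowed by it.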

How to break down the size of Scala.js JS output

There exist a few webpack bundle analysis scripts that show a list of included modules along with their sizes. However, Scala.js emits one big module for all the Scala code, so those tools can't look into it.
As both a library author and an end user of other Scala.js libraries I want to know how much various Scala packages / classes contribute to my bundle size.
I understand that Scala.js optimization makes it impossible to say that a given library weighs exactly X KB, so I'm interested in a solution that looks at a specific bundle.
Currently the best I can do is search for "com.package.name" in the generated JS file and judge the density of result indicators on the scroll bar, but that's obviously tremendously suboptimal.
So my question is, are there any tools or even half baked scripts that could improve on what I'm doing?
Finally found a good solution: with the right sbt config, the source-map-explorer npm package does exactly what I need, based on source maps (of course!). It shows the direct impact of individual Scala classes on bundle size.
Source: Scala.js issue #3556
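For reference, a minimal sketch of that setup (the bundle path and project name are assumptions; Scala.js emits source maps by default for fastOptJS):

// build.sbt
enablePlugins(ScalaJSPlugin)
// make sure the linker emits the .js.map file next to the bundle
scalaJSLinkerConfig ~= { _.withSourceMap(true) }

Then point the npm tool at the emitted bundle, e.g. npx source-map-explorer target/scala-2.13/myapp-fastopt.js, and it attributes bundle bytes back to the original Scala sources via the source map.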
This might not be as precise as you want, but to help measure the approximate size of each JS file, you could inspect the binaries (with .sjsir extensions, after you run a task like fastOptJS) in target/scala-2.12/classes generated by the scalac compiler, and compile each of them to JavaScript.
To compile a binary to JavaScript you can use the code provided in this answer.

Maven dependency scopes - Revisited

Please, does anybody know the answer to the following, rather obvious, question regarding the "matrix of scopes" in the official Maven docs:
https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Dependency_Scope
The footnote explains why the cell indexed [compile,compile] again contains the value "compile".
In my opinion, the VERY SAME ARGUMENT implies the following:
Cells [compile,provided] and [provided,provided] both should contain "provided".
Cell [test,provided] should contain "test".
So why do all of these cells contain "-" ?!?
It doesn't make sense to me...
Many thanks in advance for all kind of useful suggestions!
provided means that the dependency is provided by the container, which implies that the usage of this scope depends on the container at hand.
Therefore, this scope should be set at the level of the "deployable unit" (war, ear, standalone jar), not somewhere deep down in the transitive dependency tree.
Hence it is not useful to have transitivity for provided dependencies.
Instead you can overwrite the scope with dependencyManagement at the highest level to ensure that you mark those dependencies as provided that are provided by your container.
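As a hedged sketch of that last suggestion (the servlet API is just a typical example of a container-provided dependency), the deployable unit's POM could pin the scope via dependencyManagement:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>javax.servlet-api</artifactId>
      <version>4.0.1</version>
      <!-- the container supplies this at runtime, wherever it appears in the tree -->
      <scope>provided</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

This keeps the decision at the top of the build, where knowledge about the container actually lives.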

Maven - How to perform conditional execution

Is there a way to perform conditional execution of snippet in pom.xml?
My requirement is to copy a file/directory to the deploy structure based on a variable defined in pom.xml...
eg:
<if>
  <equals arg1="package" arg2="package"/>
  <then>
    ....
  </then>
</if>
Not sure how I can achieve this!
Any pointers would be highly appreciated.
Thanks,
SR
Probably you'll need to use the Maven AntRun Plugin for that.
In general, there are no conditional expressions in the POM. The only thing somewhat similar are build profiles, which can be activated under specified conditions; however, they probably don't fit your current case.
And, at the end, my suggestion: we don't know exactly what your case is about and don't even have a real code snippet, but in my experience it's really unusual to need such hacks in Maven. To me it smells like a problem with understanding Maven, with the project structure, or something like that. I may be wrong, and maybe your case really needs this, but consider other options that fit Maven's default approach and conventions instead.
Well, you can use Maven's profiles to do that.
Or you can take a look at Maven's Ant Tasks; a sketch combining both ideas follows.
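As a hedged sketch (the property name, paths, and plugin version are assumptions), a profile activated by a property runs a plain Ant copy during package:

<profiles>
  <profile>
    <id>copy-extras</id>
    <activation>
      <property>
        <name>deploy.extras</name>
        <value>true</value>
      </property>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>3.1.0</version>
          <executions>
            <execution>
              <phase>package</phase>
              <goals>
                <goal>run</goal>
              </goals>
              <configuration>
                <target>
                  <!-- plain Ant copy; runs only when the profile is active -->
                  <copy todir="${project.build.directory}/deploy">
                    <fileset dir="src/extras"/>
                  </copy>
                </target>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Activate it with mvn package -Ddeploy.extras=true; without the property the snippet simply never runs.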

LLVM: is it possible to merge validation and compilation in a single stage?

Generally speaking, when writing an LLVM frontend, one will take an AST and first check that its semantics are well-defined. After this, one will take the AST and perform the IR build phase.
I was wondering how realistic it is to perform the IR build phase directly on the AST and, if errors are found during the build process, revert any partial changes to the module object.
I assume something like this would be required:
remove defined Types
remove defined Globals
anything else I'm missing?
Any ideas about this? What are the general guidelines for what needs to be done for a clean revert of module changes after a failed build phase?
Now, this is thinking in terms of optimistically compiling and failing gracefully if something goes wrong. It might very well be that this is completely impossible or discouraged under the current LLVM model. A clear and well-documented answer in this regard is also completely acceptable.
Edit: In the end, I just want a reasonable way to add functions incrementally but revert gracefully to the previous state of the module and/or LLVMContext if a function build fails. Whatever the preferred approach is for that will be entirely satisfactory.
thanks!
Many compilers (not necessarily LLVM-related) mix semantic analysis with code generation, so it can definitely be done. However, I'm puzzled by your reference to "reverting any partial changes to the module object". When you start building an IR module and encounter a semantic error in the AST, what is your plan? Do you want to spit out an incomplete module? Why? Thinking about the way any regular compiler works, if there are semantic errors in the code (e.g. a reference to an undefined variable), no output is created. Would you like something different?
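If, per the question's edit, the goal is just to add functions incrementally and roll back on failure, one pattern (a sketch against the LLVM C++ API; the body-building callback and its error signalling are assumptions) is to erase the half-built function from the module:

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Try to emit one function; on a semantic error, remove the partial
// definition so the module is left as it was before the call.
Function *emitFunctionOrRollBack(Module &m, FunctionType *fnType,
                                 StringRef name,
                                 bool (*buildBody)(IRBuilder<> &)) {
  Function *fn =
      Function::Create(fnType, Function::ExternalLinkage, name, &m);
  BasicBlock *entry = BasicBlock::Create(m.getContext(), "entry", fn);
  IRBuilder<> builder(entry);

  if (!buildBody(builder)) { // semantic error found mid-build
    fn->eraseFromParent();   // unlinks and deletes the partial function
    return nullptr;          // the module no longer contains it
  }
  return fn;
}

Note that, as far as I know, objects owned by the LLVMContext (such as named struct types) are uniqued and not removable in the same way, which is one more reason to validate first, or to simply discard the whole module on error.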
