How does a Gradle build know which modules have changed?

I want to write a bash script which grabs only the output jars for the modules within my project that have changed (after a build), so that I can copy them up to a server. I don't want to have to copy every single module jar every time, as you would after a full clean build. It's a Gradle project using Git. I know that Gradle can do an incremental build covering only the modules whose code has been updated, but is there a way this plugin (assuming it's a plugin) can be called? I have done some searching online but can't find any info.

Gradle has the notion of inputs and outputs associated with a task. Gradle takes snapshots of a task's inputs and outputs the first time the task runs and on each subsequent execution. These snapshots contain hashes of the contents of each file. This enables Gradle to check, on subsequent executions, whether the inputs and/or outputs have changed, and to decide whether the task needs to be executed again.
This feature is also available to custom Gradle tasks (those that you write yourself) and is one way you could implement the behaviour you are looking for. You could invoke the corresponding task from a bash script if needed. More details can be found here:
Gradle User Guide, Chapter 14.
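As a rough illustration, a custom task in the root build.gradle could declare every module's jar as its inputs and a staging directory as its output, so Gradle skips the copy entirely when nothing has changed. This is only a sketch, assuming each module applies the java plugin; the task name and staging path are made up:

    task stageJars(type: Copy) {
        // The module jars are this task's inputs and the staging
        // directory its output; Gradle snapshots both and marks the
        // task UP-TO-DATE (skipping it) when neither has changed.
        from subprojects.collect { it.tasks.withType(Jar) }
        into "$buildDir/staging"
    }

A bash script could then run gradle stageJars and upload the contents of build/staging.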
Otherwise, I imagine your bash script might need to compare the modified timestamps of the files in question or to compute and compare hashes itself.

The venerable rsync exists to do exactly this kind of thing: find differences between an origin and a (possibly remote) destination, and synchronize them, with lots of options to choose how to detect the differences and how to transfer them.
Or you could use find to search for .jar files modified in the last N minutes ...
Or you could use inotifywait to detect filesystem changes as they happen...
I get that getting Gradle to tell you directly what has been built would be the most logical thing, but for that I'd say you have to think more Java/Groovy than Bash... and fight your way through the manual. A rough sketch of that approach follows.
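This is a hedged sketch, not tested against any particular Gradle version, and the output file name is made up: a snippet in the root build.gradle hooks the task graph and records the archive path of every Jar task that actually did work during the build.

    // Start with a clean list on every build.
    gradle.taskGraph.whenReady {
        file("$rootDir/changed-jars.txt").text = ''
    }
    // Record every jar that was actually rebuilt (i.e. not UP-TO-DATE)
    // so an external script can pick up just those files.
    gradle.taskGraph.afterTask { task ->
        if (task instanceof Jar && task.state.didWork) {
            file("$rootDir/changed-jars.txt") << task.archivePath.absolutePath + '\n'
        }
    }

Your bash script can then read changed-jars.txt after the build and copy only those paths.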

Related

How can I collect the output from CI?

I don't know how to collect the data from each build machine on CI. (I use TeamCity for CI, and this is my first time using CI by myself.)
After building the code and running the .exe file, an output file is generated. It is a .csv file, its size is less than 1KB, and it is very simple. I want to collect the data in one place and do some statistics.
The build and running the .exe file work fine. However, I don't know the next step. I have two ideas.
(Idea 1) Set up a log database server (e.g. Kibana/Elasticsearch) and send the output to it. However, it seems an overkill solution.
(Idea 2) Create a batch file and just copy the log somewhere.
However, I don't know the usual way to use CI and collect the data. I guess there is a better solution. Is there any way to collect the data by using CI?
I can suggest using build artifacts: you can configure your builds so that they produce files and make them available to the users of TeamCity. Then you can download them and analyze them as you need. Taking into account that the files are pretty small, I think it's an ideal variant.
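For example (the paths here are hypothetical), a single rule under the build configuration's General Settings → Artifact paths would publish the CSV with each build:

    output/*.csv => csv-results/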
If you need to collect all artifacts from every build, you can configure another build which runs a script that uses the TeamCity REST API to collect the artifacts from the relevant builds and zip them into a complete set of your files.
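The answer suggests a Python script; here is the same idea sketched in Groovy instead, with the server URL, build configuration ID and artifact file name all placeholders, and guest access assumed to be enabled on the server:

    def server = 'https://teamcity.example.com'   // placeholder
    def buildType = 'MyProject_Build'             // placeholder
    // List the builds of the configuration via the REST API.
    def builds = new XmlSlurper().parse(
        "$server/guestAuth/app/rest/builds?locator=buildType:$buildType")
    builds.build.each { b ->
        def id = b.@id
        // Download the CSV artifact produced by each build.
        new File("results-${id}.csv").withOutputStream { out ->
            out << new URL("$server/guestAuth/app/rest/builds/id:$id/artifacts/content/output.csv").openStream()
        }
    }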
As an example, you can check some builds on the JetBrains test server: just select a finished build and navigate to the Artifacts tab.
Please ask more questions if my answer is not clear enough.

Using Gradle to Rsync instead of copy

I have a project that involves moving files from one directory to another repeatedly during the build and debugging process. To help with that, I ended up making a task that copies parts of the project from one location to another.
Is there a way to get Gradle to perform an rsync instead of a copy? I feel like two minutes to copy all of the necessary files, when only a few changes have been made to one of them, isn't exactly efficient.
Or is there something wrong with Gradle for it to be taking that long?
Gradle doesn't ship with rsync-like functionality, but you could call rsync using an Exec task. It's also worthwhile to check whether there is a third-party plugin.
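A minimal sketch of the Exec approach, assuming rsync is installed and on the PATH; the task name and the source and destination paths are placeholders:

    task syncToTarget(type: Exec) {
        // -a preserves attributes; rsync only transfers files that differ.
        commandLine 'rsync', '-a', '--delete',
                    "$buildDir/libs/", '/path/to/destination/'
    }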

Dynamically adding gradle projects at runtime

I'm in the process of updating our build process for Android to use Gradle. We have client-specific apps, i.e. a single code template which is used as the basis for all the apps, which are created dynamically.
To build the apps, I loop through a CSV file to get the details for each one. I then take a copy of the source template, inserting the client's name, images, etc. before compiling the app. This works fine in the current system. In the Gradle version, I've got it successfully looping through the rows and creating the app source for each one with the right details. However, when I try to actually build the app, it fails with the message:
Project with path ':xxxxxx' could not be found in root project 'android-gradle'.
From reading the documentation, I understand that this is because the project doesn't exist during the configuration phase as it's not created until the execution phase. However what I haven't been able to find is a way around this. Has anyone managed to achieve something similar? Or perhaps a suggestion for a better approach?
One option is to script settings.gradle. As in any other Gradle script, you have the full power of Groovy available. After the settings phase, you can no longer change which projects the build is composed of. Note that settings.gradle will be evaluated for each and every invocation of Gradle, so evaluation needs to be fast.
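A minimal sketch of such a scripted settings.gradle, assuming a clients.csv whose first column is the client name; the file name and directory layout are made up:

    // settings.gradle: include one subproject per CSV row.
    new File(rootDir, 'clients.csv').eachLine { line ->
        def client = line.split(',')[0].trim()
        include ":$client"
        // Point each dynamic project at its generated source copy.
        project(":$client").projectDir = new File(rootDir, "generated/$client")
    }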
While Peter's answer pointed me in the right direction, it ended up not being a workable solution. Unfortunately, with nearly 200 apps to build, creating a copy of the source for each one was too big an overhead, and Gradle kept running out of memory.
What I have done instead is make use of the Android plugin's product flavors functionality. It was quite straightforward to dynamically add a productFlavor for each row in the CSV (and I can do that in build.gradle rather than settings.gradle), and I just set the srcDir to point to the relevant images etc. for each one.
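Roughly like this; this is a sketch only, as the CSV layout, column meanings and directory layout are assumptions, and older versions of the Android plugin used packageName instead of applicationId:

    android {
        productFlavors {
            // One flavor per CSV row: "name,applicationId".
            new File(rootDir, 'clients.csv').eachLine { line ->
                def (name, appId) = line.split(',')*.trim()
                create(name) {
                    applicationId appId
                }
            }
        }
        sourceSets {
            // Each flavor pulls its resources from a client-specific folder.
            new File(rootDir, 'clients.csv').eachLine { line ->
                def name = line.split(',')[0].trim()
                "$name" { res.srcDirs = ["clients/$name/res"] }
            }
        }
    }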

What does UP-TO-DATE in gradle indicate?

I have recently started using Gradle in a project, and I am running the standard
gradle clean init build
I noticed that for many of the tasks that are run, I get this UP-TO-DATE message in the console, next to the task name, e.g.:
:foo:bar:test UP-TO-DATE
I am curious as to what this message means. I couldn't find any documentation about it.
Everything that Gradle does is a task. Most tasks have inputs and outputs declared. Gradle determines whether a task is up to date by checking those inputs and outputs.
For example, your compile task's input is the source code. If the source code hasn't changed since the last compile, Gradle then checks the output to make sure you haven't blown away the class files generated by the compiler. If the inputs and outputs are unchanged, it considers the task "up to date" and doesn't execute it. This can save a lot of time, especially on large builds.
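You can see the behaviour with a tiny custom task; this is a sketch and the file names are made up. Run it twice: the second run prints :generateVersion UP-TO-DATE and the action is skipped.

    task generateVersion {
        // version.txt is the input, the generated file the output;
        // Gradle hashes both to decide whether to run the action again.
        inputs.file 'version.txt'
        outputs.file "$buildDir/version.properties"
        doLast {
            file("$buildDir/version.properties").text =
                'version=' + file('version.txt').text.trim()
        }
    }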
BTW: in case you really want to bypass this build optimization, you can use the --rerun-tasks command-line option to force execution of every task.
See the documentation of Gradle's command-line options.
Gradle is an incremental build system. This means that it checks whether a task really must be executed before actually executing it, in order to be faster.
So, for example, if you already compiled your source files in a previous build and didn't modify any source file (nor any other input of the compile task), then Gradle won't recompile the source files, because it knows this would lead to exactly the same output as the one already present in the build folder. The compilation is thus safely skipped, leading to a faster build.
More information can be found in the documentation.
I faced a similar problem and tried all the options, but the following one worked for me:
gradle -Dorg.gradle.daemon=false <your tasks>

Scheme Script vs. Build Phase Script

After I make a build I want to copy some files into my Xcode project.
I discovered that I could do this either in "Build Phases" with a custom build step, or by executing scripts before and after the different "tasks" in the Scheme editor:
Build (This is where I could add my script)
Run
Test
Profile
Analyze
Archive
I don't completely understand the differences and possible implications of the two approaches, and I am wondering when to choose which. Thanks for any clarification.
After I make a build I want to copy some files into my Xcode project.
I assume you want to copy files to your build product, not the Xcode project.
There are several subtle differences between scheme and build phase scripts. Here are some of them:
Scheme scripts are part of the scheme, so sharing them with other developers is more configurable. Build phase scripts, on the other hand, are part of the target and cannot be skipped simply by choosing another scheme.
Scheme scripts can run before dependency checking, so you can use them to modify source files and still get up-to-date results. This is not possible with build phase scripts.
The information passed to the scripts in environment variables differs slightly. Depending on what information you need, you sometimes have to choose the right kind of script.
Build phase scripts run conditionally, only if the build succeeds up to their position in the target's build phases.
Build phase scripts can be configured to run only when their input files have changed.
There isn't much difference between the two; however, you have more control over where in the build sequence build phase scripts run, so they are preferable (for example, you could modify files that have already been copied by standard Xcode build phases).
I always use build phase scripts myself and have never used scheme scripts. They are more visible and more manageable.
