Regex directory matching in Groovy - Gradle

As part of a Gradle delete task, I would like to delete all tar files beginning with CSW and ending with .tar.gz from a directory. How can I achieve this with Groovy?
I tried something like below:
delete (file('delivery/').eachDirMatch(~/^CSW$.*.gz/))
But it doesn't work. How do I use a regex for this in Groovy? The shell equivalent would be rm -rf CSW*.tar.gz.

I will answer with a Gradle solution as you refer to Gradle in your question. Something along the lines of:
task myTask(type: Delete) {
    delete fileTree(dir: 'delivery', include: '**/CSW*.tar.gz')
}
where the delete method call runs at configuration time and configures what will be deleted when the task eventually executes. For details it is worth looking through the Gradle docs on the fileTree method.
If you need to stay pure Groovy you could do something along the lines of:
new AntBuilder().fileScanner {
    fileset(dir: 'delivery', includes: '**/CSW*.tar.gz')
}.each { File f ->
    f.delete()
}
If this code lives in a Gradle script I would recommend sticking with option one, as it retains Gradle's up-to-date checking and fits well within the Gradle configuration-versus-execution-time pattern.
If you really want to use a regex rather than the Ant-style patterns above, you certainly can, though I personally don't see much point given your requirements.
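For completeness, a minimal sketch of the regex route, combining the Delete task with an include spec (the task name myRegexDelete is made up; ==~ does a full match against each file name):

task myRegexDelete(type: Delete) {
    delete fileTree('delivery').matching {
        include { it.name ==~ /CSW.*\.tar\.gz/ }
    }
}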

Related

Get list of files containing string(s) or pattern(s)

Is there a Gradle pattern for retrieving the list of files in a folder or set of folders that contain a given string, set of strings, or pattern?
My project produces RPMs and is using the Nebula RPM type (great package!). There are a couple of different kinds of sets of files that need post-processing. I am trying to generate the list of files that contain the strings that are the indicators for post-processing. For example, files that contain "#doc" need to be processed by the doc generator script. Files that contain "#HOSTNAME#" and "#HOSTFQDN#" need to be processed by sed to replace the strings with the actual host name or host fqdn.
The search root in the package will be src\main\resources. With the result the build script sets up the post-install script commands - something like:
postInstall('/opt/product/bin/postprocess.sh ' + join(filesContainingDocs, " "))
postInstall('/bin/sed -i -e "s/#HOSTNAME#/$(hostname -s)/" -e "s/#HOSTFQDN#/$(hostname)/" ' + join(filesContainingHostname, " "))
I can figure out the postinstall syntax. I'm having difficulty finding the filter for any of the regular Gradle 'things' (i.e., FileTree) that operate on contents of files rather than names of files. How would I populate filesContainingDocs and filesContainingHostname - something along the lines of:
filesContainingDocs = FileTree('src/main/resources', { contents.matches('#doc') })
filesContainingHostname = FileTree('src/main/resources', { contents.matches('#(HOSTNAME|HOSTFQDN)#') })
While the post-process script could simply do the grep, the several RPMs in our product overlay each other and each RPM should only post-process the files it provides, so a general grep over the final installed folder is not workable - it would catch files provided by other RPMs. It seems to me that I ought to be able to, at build time, produce the correct static list of files from the bigger set of source files that comprise the given RPM's project.
It doesn't have to be FileTree - running a command like findstr /s /m /c:"#doc" src\main\resources\*.conf (alas, the build platform is Windows) produces the answer in stdout but I'm not sure how to get that result into an object Gradle can use to expand the result. (I also suspect there is a 'more Gradle way' to do this.)
The set of files, and the contents of those files, is generally fairly small.
I'm having difficulty finding the filter for any of the regular Gradle 'things' (i.e., FileTree) that operate on contents of files rather than names of files.
You can apply any filter you can imagine on a Gradle file tree, in the end it is just Groovy (or Kotlin) code running in the JVM. Each Gradle FileTree is nothing more than a (lazily evaluated) collection of Java File objects. To filter those File objects, you can read their content, e.g. in the same way you would read them in Java. Groovy even provides a JDK enhancement for the Java class File that includes the simple method getText() for this purpose. Now you can easily filter for files that contain a certain string:
filesContainingDocs = fileTree('src/main/resources').filter { file ->
    file.text.contains('#doc')
}
Using Groovy, you can call getters like .getText() in the same way as accessing fields (.text in this case).
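For example, both of these read the whole file:

def f = file('build.gradle')     // any readable file will do
assert f.getText() == f.text     // property syntax invokes the same getter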
If a simple contains check is not enough, the Groovy JDK enhancements even provide the method matches(Pattern pattern) on CharSequence/String instances to perform a regular expression check:
filesContainingDocs = fileTree('src/main/resources').filter { file ->
    file.text.replace('\r\n', '\n').matches('.*some regex.*')
}
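Putting the pieces together for the question's sed case, a rough sketch (filesContainingHostname and the Nebula postInstall call are taken from the question; joining bare file names assumes the post-install script resolves them against the install root):

def filesContainingHostname = fileTree('src/main/resources').filter { file ->
    file.text =~ /#(HOSTNAME|HOSTFQDN)#/    // =~ finds a match anywhere in the text
}.files

postInstall('/bin/sed -i -e "s/#HOSTNAME#/$(hostname -s)/" ' +
        '-e "s/#HOSTFQDN#/$(hostname)/" ' +
        filesContainingHostname*.name.join(' '))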

Delete directory with all files in it using gradle task

In my project's root I have a directory as follows:
build/exploded-project/WEB-INF/classes
I want to delete all the files in the classes directory using a Gradle task. I tried the following combinations but none of them worked:
task deleteBuild(type: Delete) {
    project.delete 'build/exploded-project/WEB-INF/classes/'
}
task deleteBuild(type: Delete) {
    delete 'build/exploded-project/WEB-INF/classes/'
}
task deleteBuild(type: Delete) {
    delete '$rootProject.projectDir/build/exploded-project/WEB-INF/classes/'
}
task deleteBuild(type: Delete) {
    delete fileTree('build/exploded-project/WEB-INF/classes').matching {
        include '**/*.class'
    }
}
Your second variant is correct and works fine here.
Though I'd recommend not hardcoding the path.
Use $buildDir instead of build, or if the path is the output path of another task use the respective property of that task.
If it doesn't work for you, run with -i or -d to get more information about what is going on and possibly going wrong.
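If the path is the output of another task, here is a sketch of that second suggestion (the explodeWar task and the archive path are hypothetical):

task explodeWar(type: Copy) {
    from zipTree("$buildDir/libs/my-app.war")   // hypothetical archive
    into "$buildDir/exploded-project"
}

task deleteBuild(type: Delete) {
    delete explodeWar.destinationDir            // reuse that task's output path
}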
As suggested, a better approach is to use Gradle variables.
Try:
task deleteBuild(type: Delete) {
    delete "$buildDir/exploded-project/WEB-INF/classes/"
}
Note that the single quotes have been replaced with double ones; only double-quoted strings are interpolated in Groovy.
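A quick illustration of why the quotes matter:

def name = 'classes'
println "path/$name"   // double quotes interpolate: prints path/classes
println 'path/$name'   // single quotes do not:      prints path/$name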

Build translated Sphinx docs in separate directories

I work on documentation that will be published in different languages. This is one of the reasons I use Sphinx.
I know how to generate the translated version, but with the setup described in the documentation, the resulting files replace the ones that were generated before. Thus, when generating multiple translations, I have to move the files to another directory before doing anything else. It would be more practical (and easier to deploy) to generate the translations in separate directories.
Is there a way to tell Sphinx or the makefile that when I run
make -e SPHINXOPTS="-D language='(lang)'" (format)
the files have to be generated in /build/(format)/(lang) ?
For now, only the HTML build is used (and I doubt that something else will be used) so a specific solution would be accepted if it is not possible to do it globally.
Sphinx version is 1.4.6.
I found a working solution by replacing the Makefile with a custom Python script (build.py).
Using sys.argv, I emulate the make target behaviour, and I added several options for the language. Using the subprocess module, specifically its call() function, I can run commands with a set of options. The script is built around a function that generates the command to be executed by subprocess.call():
def build_command(target, build_dir, lang=None):
    lang_opt = []
    if lang:
        lang_opt = ["-D", "language='" + lang + "'"]
        build_dir += "/" + lang
    else:
        build_dir += "/default"
    return ["sphinx-build", "-b", target, "-aE"] + lang_opt + ["source", "build/" + build_dir]
It is the lang parameter that allows me to separate each language, independently of the target. Later in the code, I just run
subprocess.call(build_command(target, target, lang))
to build the documentation in the desired language for the specified target (usually target = "html"). It can also emulate make gettext:
subprocess.call(build_command("gettext", "locale"))
And so on...
A better solution may exist, but at least this one will do the job.

How to remove an element from Gradle task outputs?

Is it possible to exclude an element from the output files of a task so that it is not considered for the up-to-date check? In my case I have a Copy task that automatically sets the destination directory in its outputs, but I'd like to remove it and register only some of the copied files.
Or, as an alternative, is it possible to overwrite the entire outputs variable?
Incremental tasks create snapshots of a task's input and output files. If these snapshots are the same for two task executions (based on hashes of the file contents), then Gradle assumes the task is up to date.
You cannot remove some files from the output and expect Gradle to forget about them, simply because the hashes will then be different.
There is an option that allows you to manually define the logic of up-to-date checks.
You should use the method upToDateWhen(Closure upToDateClosure) of the TaskOutputs class.
task myTask {
    outputs.dir files('/home/user/test')
    outputs.upToDateWhen {
        // your logic here
        return true // always up-to-date
    }
}
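For instance, a sketch that reruns the task whenever a particular output file disappears (marker.txt is a hypothetical file):

task myTask {
    outputs.dir files('/home/user/test')
    outputs.upToDateWhen {
        // hypothetical rule: up-to-date only while this marker file exists
        file('/home/user/test/marker.txt').exists()
    }
}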
I've found the solution:
task reduceZip(type: Copy) {
    outputs.files.setFrom(file("C:/temp/unzip/test.properties"))
    outputs.file(file("C:/temp/unzip/test.txt"))
    from zipTree("C:/temp/temp.zip")
    into "C:/temp/unzip"
}
As far as I know, the outputs.files collection can only have new elements registered, not removed. So I need to reset the collection and then add the other files. The outputs.files.setFrom method resets the outputs.files collection, after which other files can be added. In the example above I reduce the up-to-date check to just the test.properties and test.txt files.

Join multiple Coffeescript files into one file? (Multiple subdirectories)

I've got a bunch of .coffee files that I need to join into one file.
I have folders set up like a rails app:
/src/controller/log_controller.coffee
/src/model/log.coffee
/src/views/logs/new.coffee
CoffeeScript has a command that lets you join multiple CoffeeScript files into one, but it only seems to work with one directory. For example, this works fine:
coffee --output app/controllers.js --join --compile src/controllers/*.coffee
But I need to be able to include a bunch of subdirectories kind of like this non-working command:
coffee --output app/all.js --join --compile src/*/*.coffee
Is there a way to do this? Is there a UNIXy way to pass in a list of all the files in the subdirectories?
I'm using the terminal on OS X.
They all have to be joined in one file because otherwise each separate file gets compiled & wrapped with this:
(function() { }).call(this);
Which breaks the scope of some function calls.
From the CoffeeScript documentation:
-j, --join [FILE] : Before compiling, concatenate all scripts together in the order they were passed, and write them into the specified file. Useful for building large projects.
So, you can achieve your goal at the command line (I use bash) like this:
coffee -cj path/to/compiled/file.js file1 file2 file3 file4
where file1 through fileN are the paths to the CoffeeScript files you want to compile.
You could write a shell script or Rake task to combine them together first, then compile. Something like:
find . -type f -name '*.coffee' -print0 | xargs -0 cat > output.coffee
Then compile output.coffee
Adjust the paths to your needs. Also make sure that the output.coffee file is not in the same path you're searching with find or you will get into an infinite loop.
See http://man.cx/find and http://www.rubyrake.org/tutorial/index.html.
Additionally you may be interested in these other posts on Stackoverflow concerning searching across directories:
How to count lines of code including sub-directories
Bash script to find a file in directory tree and append it to another file
Unix script to find all folders in the directory
I've just released an alpha version of CoffeeToaster; I think it may help you.
http://github.com/serpentem/coffee-toaster
The easiest way is to use the coffee command-line tool:
coffee --output public --join --compile app
app is my working directory holding multiple subdirectories, and public is where the output.js file will be placed. It's easy to automate this process if you're writing the app in Node.js.
This helped me (-o output directory, -j join into project.js, -cw compile and watch the coffeescript directory to full depth):
coffee -o web/js -j project.js -cw coffeescript
Use cake to compile them all into one (or more) resulting .js file(s). The Cakefile serves as configuration and controls the order in which your coffee scripts are compiled - quite handy with bigger projects.
Cake is quite easy to install and set up; invoking cake from vim while you are editing your project is then simply
:!cake build
and you can refresh your browser and see results.
As I'm also busy learning the best way of structuring files and using CoffeeScript in combination with Backbone and cake, I have created a small project on GitHub to keep as a reference for myself; maybe it will help you with cake and some basic things too. All compiled files are in the www folder so that you can open them in your browser, and all source files (except for the cake configuration) are in the src folder. In this example, all .coffee files are compiled and combined into one output .js file, which is then included in the HTML.
Alternatively, you could use the --bare flag, compile to JavaScript, and then perhaps wrap the JS if necessary. But this would likely create problems; for instance, if you have one file with the code
i = 0
foo = -> i++
...
foo()
then there's only one var i declaration in the resulting JavaScript, and i will be incremented. But if you moved the foo function declaration to another CoffeeScript file, then its i would live in the foo scope, and the outer i would be unaffected.
So concatenating the CoffeeScript is a wiser solution, but there's still potential for confusion there; the order in which you concatenate your code is almost certainly going to matter. I strongly recommend modularizing your code instead.
