Adding templated arguments to a command in bash - shell

I am setting up a new project with multiple extensions. My goal is to track code coverage across all extensions. Extensions live in subdirectories of the extensions directory and each has multiple source folders. The number of extensions in my project is not final, so I will almost certainly add more. Consider a structure like this:
extensions
    extension A
        src
        testsrc
        web
            src
            testsrc
    extension B
        ...
All extensions follow the same structure. I am using the coverage-jdk11 job as described here:
https://docs.gitlab.com/ee/user/project/merge_requests/test_coverage_visualization.html#java-and-kotlin-examples
Now instead of
python /opt/cover2cover.py target/site/jacoco/jacoco.xml src/main/java > target/site/cobertura.xml
I need to add multiple src directories, which is supported. So my current version looks like this:
python /opt/cover2cover.py jacoco.xml extensions/extensionA/src extensions/extensionA/testsrc extensions/extensionA/web/src extensions/extensionA/web/testsrc > cobertura.xml
But this one obviously only supports extensionA. My idea is to iterate through the subdirectories of the extensions directory and create multiple arguments for each subdirectory. But I have no idea how to do this in shell.

In the end I got it working with the following command (GNU find wants the global -maxdepth option before tests like -type):
find extensions -maxdepth 3 -type d -regex "^extensions/.*/\(testsrc\|src\)$" | xargs -t python /opt/cover2cover.py hybris/log/junit/jacoco.xml > hybris/log/junit/cobertura.xml
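Alternatively, the iteration the question asks about can be done with a plain glob loop. A sketch, where collect_src_dirs is a hypothetical helper name and the directory names follow the layout from the question:

```shell
#!/usr/bin/env bash
# Collect every src/testsrc directory under extensions/, so that each
# extension added later is picked up automatically.
collect_src_dirs() {
    local dir candidate
    for dir in extensions/*/; do
        for candidate in "${dir}src" "${dir}testsrc" "${dir}web/src" "${dir}web/testsrc"; do
            # Only emit directories that actually exist in this extension.
            if [ -d "$candidate" ]; then
                printf '%s\n' "$candidate"
            fi
        done
    done
}
```

The list can then be fed to the converter the same way as with find: collect_src_dirs | xargs -t python /opt/cover2cover.py jacoco.xml > cobertura.xml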

Related

pandoc to make each directory a chapter

I have a lot of markdown files in various directories, each with the same format (# title, then ## sub-title).
Can I make --toc respect the folder layout, so that each folder becomes the name of a chapter and each markdown file inside it becomes that chapter's content?
So far pandoc totally ignores my folder names; it behaves the same as if all the markdown files were in one folder.
My approach is to create an index file in each folder with a first-level heading and downgrade the headings in all other files by one level.
I use Git, and by default I keep the standard structure with first-level headings in the files. When I want to generate an ebook with pandoc, I modify the files via an automated shell script, and afterwards revert the changes with Git.
Here's the script:
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e "s/^(#)+\s/#$&/g" {} \;
./docs/*/ means I'm looking only for files inside subfolders of the docs directory, like docs/foo/file1.md or docs/bar/file2.md.
I'm also interested only in *.md files, excluding *index.md files.
In the index.md files (which I usually name 00-index.md so they appear first), I put a first-level heading #, and because those files are excluded by the find portion of the script, their headings aren't downgraded.
Next, there's a Perl search-and-replace command with the regular expression s/^(#)+\s/#$&/g, which finds every line starting with one or more # and prepends another #.
In the end, I'm running pandoc with --toc-depth=2 so the table of contents contains only first- and second-level headings.
pandoc ./docs/**/*.md --verbose --fail-if-warnings --toc-depth=2 --table-of-contents -o ./ebook.epub
To revert all changes made to files, I restore changes in the Git repo.
git restore .
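The whole cycle can be sketched as one script. The demotion step is wrapped in a function (demote_headings is a hypothetical name); the pandoc and Git steps are shown as comments because they need pandoc installed and a Git checkout:

```shell
#!/usr/bin/env bash
# Demote every heading by one level in non-index markdown files under docs/.
demote_headings() {
    find ./docs/*/ -name "*.md" ! -name "*index.md" \
        -exec perl -pi -e 's/^(#)+\s/#$&/g' {} \;
}

# Full cycle:
#   demote_headings
#   shopt -s globstar   # lets ./docs/**/*.md recurse in bash
#   pandoc ./docs/**/*.md --toc-depth=2 --table-of-contents -o ./ebook.epub
#   git restore .
```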

Shell Script for searching Makefile recursively in directories

I need to write a shell script on a Debian-based OS that recursively identifies which folders contain a Makefile. If one is present, build the package; if not, just list those folders. The catch, as shown below, is that I only need to look one folder below the parent folder (ABC): check whether a Makefile is present under Folder1, Folder2, etc., but do not descend into their subdirectories (do not look for a Makefile under Folder1.1, Folder1.2, Folder2.1, and so on). I'm looking for tips on how to loop only one level deep and then return to ABC to continue the search.
ABC
|-- Folder1
|   |-- Makefile
|   |-- Folder1.1
|   `-- Folder1.2
|-- Folder2
|   |-- Somefile
|   |-- Folder2.1
|   `-- Folder2.2
`-- FolderN
    |-- Makefile
    |-- FolderN.1
    `-- FolderN.2
As answered by Karthikraj in the comments above, this helped:
find . -maxdepth 2 -type f -iname 'makefile'
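The build-or-list part the question asks for can be sketched with a glob loop over the immediate children of ABC only. build_all is a hypothetical helper name, and make -C stands in for whatever build command your packages actually need:

```shell
#!/usr/bin/env bash
# Loop over the immediate subdirectories of ABC: build where a Makefile
# exists, otherwise just list the folder.
build_all() {
    local dir
    for dir in ABC/*/; do
        if [ -f "${dir}Makefile" ] || [ -f "${dir}makefile" ]; then
            make -C "$dir"
        else
            echo "No Makefile: $dir"
        fi
    done
}
```

Run it from the directory that contains ABC; the glob never descends below the first level, so Folder1.1 and friends are never inspected.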

How to exclude files using ls?

I'm using a python script I wrote to take some standard input from ls and load the data in the files described by that path. It looks something like this:
ls -d /path/to/files/* | python read_files.py
The files have a certain name structure based on what data they have in them but are all stored in the same directory. The files I want to use have the name structure A<fileID>_###.txt (where ### is always some 3 digit number). I can accomplish getting only the files that start with A by just changing what I have above slightly to ls -d /path/to/files/A*. HOWEVER, some files have a suffix flag called B (so the file looks like A<fileID>_###B.txt) and I DO NOT want to include those.
So, my question is, is there a way to exclude those files that end in ...B.txt (or a way to only include files that end in a number)? I thought about something to the effect of:
ls -d /path/to/files/A*%d.txt
to only include files that end in a number followed by the file extension, but couldn't find any documentation on anything of the sort.
You could try this: ls A*[^B].txt
With extended globbing. Note that a pattern like A*!(B).txt would not exclude anything, since !(B) can also match the empty string; the negation has to cover the whole stem:
shopt -s extglob
ls A!(*B).txt
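A self-contained demo of both exclusion styles; the sample file names are hypothetical, following the question's A<fileID>_###.txt scheme:

```shell
#!/usr/bin/env bash
shopt -s extglob
cd "$(mktemp -d)"
touch A1_001.txt A1_002B.txt A2_123.txt A2_123B.txt

ls A*[!B].txt   # bracket class: the character before .txt must not be B
ls A!(*B).txt   # extglob: exclude any name whose stem ends in B
```

Both listings keep A1_001.txt and A2_123.txt and drop the two B-suffixed files.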

Ctags - tag for project and library files (multiple dir with absolute path)

The directory hierarchy below is from the ctags FAQ.
I can create a tag file with absolute file paths as follows:
cd ~/project
ctags --file-scope=no -R `pwd`
How can I create one tag file that covers both my project and the standard library functions?
For example, say my project is /sysint/client and the library is at /misccomp. How can I create a tag file that covers both of these directories and their subdirectories? (I do not want to index everything under /.)
Do you think splitting into two tag files is better?
.
|-- misccomp
|   `-- ...
`-- sysint
    |-- client
    |   |-- hdrs
    |   |-- lib
    |   |-- src
    |   `-- test
    |-- common
    |   |-- hdrs
    |   |-- lib
    |   |-- src
    |   `-- test
    `-- server
        |-- hdrs
        |-- lib
        |-- src
        `-- test
I think that splitting into two tag files is better. Why:
You need to update your tags from time to time, and a smaller tag file updates faster than a bigger one. So when you edit your project, only the project's tags are updated, and when you edit the lib, only the lib's tags are updated, instead of updating all the tags every time.
While I was writing the Indexer plugin, I found out that several tag files work no slower than a single tag file.
I would also recommend the Indexer plugin: it does all the work automatically, provides painless automatic tag generation, and keeps tags up to date. The tags file is generated in the background, so you don't have to wait. Check it out if you want.
UPD: For detailed information, see the article Vim: convenient code navigation for your projects, which explains the usage of Indexer + Vimprj thoroughly. Among other things, it covers tags for libraries, which is exactly what you want.

Join multiple Coffeescript files into one file? (Multiple subdirectories)

I've got a bunch of .coffee files that I need to join into one file.
I have folders set up like a rails app:
/src/controller/log_controller.coffee
/src/model/log.coffee
/src/views/logs/new.coffee
CoffeeScript has a command that lets you join multiple CoffeeScript files into one, but it only seems to work within a single directory. For example, this works fine:
coffee --output app/controllers.js --join --compile src/controllers/*.coffee
But I need to be able to include a bunch of subdirectories kind of like this non-working command:
coffee --output app/all.js --join --compile src/*/*.coffee
Is there a way to do this? Is there a UNIXy way to pass in a list of all the files in the subdirectories?
I'm using terminal in OSX.
They all have to be joined in one file because otherwise each separate file gets compiled & wrapped with this:
(function() { }).call(this);
Which breaks the scope of some function calls.
From the CoffeeScript documentation:
-j, --join [FILE] : Before compiling, concatenate all scripts together in the order they were passed, and write them into the specified file. Useful for building large projects.
So, you can achieve your goal at the command line (I use bash) like this:
coffee -cj path/to/compiled/file.js file1 file2 file3 file4
where file1 - fileN are the paths to the coffeescript files you want to compile.
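As for a UNIXy way to build that file list across subdirectories, find can produce it. A sketch, where list_coffee is a hypothetical helper name and the src/ layout is the one from the question:

```shell
#!/usr/bin/env bash
# Print every .coffee file under src/; sort makes the join order
# deterministic (order matters when joining).
list_coffee() {
    find src -type f -name '*.coffee' | sort
}

# Usage, feeding the list to the compiler in one call:
#   list_coffee | xargs coffee --output app/all.js --join --compile
```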
You could write a shell script or Rake task to combine them together first, then compile. Something like:
find . -type f -name '*.coffee' -print0 | xargs -0 cat > output.coffee
Then compile output.coffee
Adjust the paths to your needs. Also make sure that the output.coffee file is not in the same path you're searching with find or you will get into an infinite loop.
http://man.cx/find
http://www.rubyrake.org/tutorial/index.html
Additionally you may be interested in these other posts on Stackoverflow concerning searching across directories:
How to count lines of code including sub-directories
Bash script to find a file in directory tree and append it to another file
Unix script to find all folders in the directory
I've just released an alpha version of CoffeeToaster; I think it may help you.
http://github.com/serpentem/coffee-toaster
The easiest way, using the coffee command-line tool:
coffee --output public --join --compile app
app is my working directory holding multiple subdirectories, and public is where the output .js file will be placed. This process is easy to automate if you're writing your app in Node.js.
This helped me (-o output directory, -j join to project.js, -cw compile and watch coffeescript directory in full depth):
coffee -o web/js -j project.js -cw coffeescript
Use cake to compile them all into one (or more) resulting .js file(s). The Cakefile is used as configuration, controlling the order in which your coffee scripts are compiled; quite handy with bigger projects.
Cake is quite easy to install and setup, invoking cake from vim while you are editing your project is then simply
:!cake build
and you can refresh your browser and see results.
As I'm also busy learning the best way to structure files and use CoffeeScript in combination with Backbone and Cake, I created a small project on GitHub to keep as a reference for myself; maybe it will help you with Cake and some basics too. All compiled files are in the www folder so you can open them in your browser, and all source files (except the Cake configuration) are in the src folder. In this example, all .coffee files are compiled and combined into one output .js file, which is then included in the HTML.
Alternatively, you could use the --bare flag, compile to JavaScript, and then perhaps wrap the JS if necessary. But this would likely create problems; for instance, if you have one file with the code
i = 0
foo = -> i++
...
foo()
then there's only one var i declaration in the resulting JavaScript, and i will be incremented. But if you moved the foo function declaration to another CoffeeScript file, then its i would live in the foo scope, and the outer i would be unaffected.
So concatenating the CoffeeScript is a wiser solution, but there's still potential for confusion there; the order in which you concatenate your code is almost certainly going to matter. I strongly recommend modularizing your code instead.
