Ctags: tag files for project and library files (multiple directories with absolute paths)

The directory hierarchy below is from the ctags FAQ.
I can create a tag file with absolute file paths like this:
cd ~/project
ctags --file-scope=no -R `pwd`
How can I create one tag file that covers both my project and the standard library functions?
For example, say my project is /sysint/client and the library is at /misccomp. How can I create a tag file that searches both of these directories and their subdirectories? (I do not want to index everything under /.)
Do you think splitting into 2 tag files is better?
`-----misccomp
|       `...
`-----sysint
        `-----client
        |       `-----hdrs
        |       `-----lib
        |       `-----src
        |       `-----test
        `-----common
        |       `-----hdrs
        |       `-----lib
        |       `-----src
        |       `-----test
        `-----server
                `-----hdrs
                `-----lib
                `-----src
                `-----test

I think that splitting into 2 tag files is better. Why:
You will need to regenerate your tags from time to time, and it is faster to update a small tag file than a big one. So when you edit your project, only the project's tags are updated, and when you edit the library, only the library's tags are updated, instead of rebuilding all the tags every time.
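For example, a minimal sketch of such a two-tag-file setup, assuming the /sysint/client and /misccomp paths from the question (the tag file locations are just an illustration):
# one tag file per tree, each built with absolute paths
cd /sysint/client && ctags --file-scope=no -R -f /sysint/client/tags `pwd`
cd /misccomp && ctags --file-scope=no -R -f /misccomp/tags `pwd`
Then point Vim at both files in your .vimrc:
set tags=/sysint/client/tags,/misccomp/tags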
While I was writing the plugin Indexer I found out that several tag files work no slower than a single tag file.
I would also recommend the plugin Indexer; it does all of this work automatically. It provides painless automatic tag generation and keeps the tags up to date. The tags file is generated in the background, so you don't have to wait. Check it out if you want.
UPD: For detailed information, see the article Vim: convenient code navigation for your projects, which explains the usage of Indexer + Vimprj thoroughly. Among other things, it covers tags for libraries, which is exactly what you want.

Related

pandoc to make each directory a chapter

I have a lot of markdown files in various directories, each with the same format (# title, then ## sub-title).
Can I make --toc respect the folder layout, so that each folder name becomes a chapter and each markdown file inside it becomes that chapter's content?
So far pandoc totally ignores my folder names; it behaves the same as if all the markdown files were in a single folder.
My approach to this is to create index files in each folder with a first-level heading and to downgrade the headings in all other files by one level.
I use Git, and by default I keep the regular structure with first-level headings in the files; when I want to generate an ebook with pandoc, I modify the files via an automated Linux shell script and afterwards revert the changes via Git.
Here's the script:
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e "s/^(#)+\s/#$&/g" {} \;
./docs/*/ means I'm looking only at files inside subfolders of the docs directory, like docs/foo/file1.md and docs/bar/file2.md.
I'm also interested only in *.md files, excluding *index.md files.
In the index.md files (which I usually name 00-index.md so they sort first), I put a first-level heading #; because those files are excluded by the find portion of the script, their headings aren't downgraded.
Next comes Perl's search-and-replace with the regular expression s/^(#)+\s/#$&/g, which matches every line starting with one or more # followed by whitespace and prepends another # to it, so # Title becomes ## Title.
In the end, I run pandoc with --toc-depth=2 so the table of contents contains only first- and second-level headings.
pandoc ./docs/**/*.md --verbose --fail-if-warnings --toc-depth=2 --table-of-contents -o ./ebook.epub
To revert all changes made to files, I restore changes in the Git repo.
git restore .
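A minimal sketch of the whole round trip as one script, assuming the docs/ layout and the 00-index.md convention described above:
#!/bin/bash
# 1. downgrade headings in every non-index markdown file
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e "s/^(#)+\s/#$&/g" {} \;
# 2. build the epub; the shell expands the glob in lexical order, hence the 00- prefix on index files
shopt -s globstar
pandoc ./docs/**/*.md --toc-depth=2 --table-of-contents -o ./ebook.epub
# 3. revert the heading changes
git restore .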

Adding templated arguments to a command in bash

I am setting up a new project with multiple extensions. My goal is to track the code coverage for all extensions. Extensions live in subdirectories of the extensions directory and have multiple source folders. The number of extensions in my project is not final, so I will most certainly add one or more. Consider a structure like this:
extensions
    extension A
        src
        testsrc
        web
            src
            testsrc
    extension B
        ...
All extensions follow the same structure. I am using the coverage-jdk11 job as described here:
https://docs.gitlab.com/ee/user/project/merge_requests/test_coverage_visualization.html#java-and-kotlin-examples
Now instead of
python /opt/cover2cover.py target/site/jacoco/jacoco.xml src/main/java > target/site/cobertura.xml
I need to add multiple src directories, which is supported. So my current version looks like this:
python /opt/cover2cover.py jacoco.xml \
    extensions/extensionA/src \
    extensions/extensionA/testsrc \
    extensions/extensionA/web/src \
    extensions/extensionA/web/testsrc \
    > cobertura.xml
But this one obviously only supports extensionA. My idea is to iterate through the subdirectories of the extensions directory and create multiple arguments for each subdirectory. But I have no idea how to do this in shell.
In the end I got it working with the following command:
find extensions -maxdepth 3 -type d -regex "^extensions/.*/\(testsrc\|src\)$" | xargs -t python /opt/cover2cover.py hybris/log/junit/jacoco.xml > hybris/log/junit/cobertura.xml
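A sketch of the same idea in plain bash, collecting the source directories into an array instead of using find/xargs (assumes the extension layout shown above and the same jacoco/cobertura paths):
src_dirs=()
for dir in extensions/*/src extensions/*/testsrc extensions/*/web/src extensions/*/web/testsrc; do
  # only keep globs that actually resolved to a directory
  [ -d "$dir" ] && src_dirs+=("$dir")
done
python /opt/cover2cover.py hybris/log/junit/jacoco.xml "${src_dirs[@]}" > hybris/log/junit/cobertura.xml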

vim set tags with absolute path

I work on multiple projects, each ~3-5 million lines of code. I have a single tags file at the root of each project. I also have a tools directory shared between all of them.
disk1
|
+--Proj A
|
+--Proj B
|
+--Shared

disk2
|
+--Proj C
|
+--Proj D
When using tags, I would like Vim to first search the tags file at the root of my project, then the tags file for Proj X, and then the tags file in Shared.
I can't get Vim to find the tags file in Shared.
in my .vimrc file I have:
set tags=tags;D:/Shared
set tags=tags;,D:/Shared (thanks to romainl for catching a missing comma!)
but Vim only searches the local project tags file, not the shared one.
tags; should start at the CWD and traverse back up the tree until a tags file is found (finds the correct one at the project level).
D:/Shared is an explicit path and should find the tags file in that directory but fails to do so (I've checked, it does in fact exist).
I'm using Exuberant Ctags v5.8.
set tags=tags;D:/Shared
means "look upward for a tags file from the current directory until you reach D:/Shared".
If you work in project C on disk 2 (let's call that disk E:), Vim will never visit D:/Shared because of two things:
Upward search is not recursive.
If no tags file is found directly in the "current directory", Vim tries to find one in its parent, and so on, until it reaches the topmost parent or the directory you specified after the semicolon. So, supposing you are editing E:\ProjectC\path\to\some\file, you can't expect Vim to find a tags file outside of that path. Vim will search for the following tags files, sequentially, and, by the way, it will never find that hypothetical D:\Shared:
E:\ProjectC\path\to\some\tags <-- KO
E:\ProjectC\path\to\tags <-- KO
E:\ProjectC\path\tags <-- KO
E:\ProjectC\tags <-- OK!
E:\tags <-- KO
It won't find any tags file not listed above.
Windows doesn't have the equivalent of UNIX's "root" directory anyway.
When you don't specify a stop directory, upward search climbs the inverted tree of your filesystem from the current directory (or an arbitrary start directory) to the root of the filesystem.
Supposing you are still editing E:\ProjectC\path\to\some\file, upward search will ultimately look for the stop directory D:\Shared directly under every parent directory in the path to E:\ and will rather obviously never find it.
If you want Vim to find D:\Shared\tags wherever you are, you only need to add it explicitly to the tags option, not as a stop directory but as a specific location:
set tags=tags;,D:/Shared/tags
Now, it says "look upward for a tags file from the current directory and use D:/Shared/tags".
Hmm… that was a lot of words just to explain the need for a single ,.
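As a quick sanity check after changing the option, Vim can show both the value and the tag files it actually resolves for the current buffer (standard built-in commands):
:set tags?
:echo tagfiles()
The first prints the current value of 'tags'; the second lists the tag files Vim actually found.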

Join multiple Coffeescript files into one file? (Multiple subdirectories)

I've got a bunch of .coffee files that I need to join into one file.
I have folders set up like a rails app:
/src/controller/log_controller.coffee
/src/model/log.coffee
/src/views/logs/new.coffee
CoffeeScript has a command that lets you join multiple CoffeeScript files into one, but it only seems to work within a single directory. For example, this works fine:
coffee --output app/controllers.js --join --compile src/controllers/*.coffee
But I need to be able to include a bunch of subdirectories kind of like this non-working command:
coffee --output app/all.js --join --compile src/*/*.coffee
Is there a way to do this? Is there a UNIXy way to pass in a list of all the files in the subdirectories?
I'm using terminal in OSX.
They all have to be joined in one file because otherwise each separate file gets compiled & wrapped with this:
(function() { }).call(this);
Which breaks the scope of some function calls.
From the CoffeeScript documentation:
-j, --join [FILE] : Before compiling, concatenate all scripts together in the order they were passed, and write them into the specified file. Useful for building large projects.
So, you can achieve your goal at the command line (I use bash) like this:
coffee -cj path/to/compiled/file.js file1 file2 file3 file4
where file1 - fileN are the paths to the coffeescript files you want to compile.
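If you want a "UNIXy" way to build that file list, command substitution over find works as a sketch (assuming no spaces in the file names; sort just makes the concatenation order deterministic):
coffee -cj app/all.js $(find src -name '*.coffee' | sort)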
You could write a shell script or Rake task to combine them together first, then compile. Something like:
find . -type f -name '*.coffee' -print0 | xargs -0 cat > output.coffee
Then compile output.coffee
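That compile step would look something like this (coffee writes output.js next to the source by default):
coffee --compile output.coffee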
Adjust the paths to your needs. Also make sure that the output.coffee file is not in the same path you're searching with find or you will get into an infinite loop.
References: http://man.cx/find and http://www.rubyrake.org/tutorial/index.html
Additionally you may be interested in these other posts on Stackoverflow concerning searching across directories:
How to count lines of code including sub-directories
Bash script to find a file in directory tree and append it to another file
Unix script to find all folders in the directory
I've just released an alpha version of CoffeeToaster; I think it may help you.
http://github.com/serpentem/coffee-toaster
The easiest way is to use the coffee command line tool:
coffee --output public --join --compile app
app is my working directory holding multiple subdirectories and public is where the output .js file will be placed. It's easy to automate this process if you're writing the app in Node.js.
This helped me (-o output directory, -j join to project.js, -cw compile and watch coffeescript directory in full depth):
coffee -o web/js -j project.js -cw coffeescript
Use cake to compile them all into one (or more) resulting .js file(s). The Cakefile serves as configuration and controls the order in which your coffee scripts are compiled - quite handy with bigger projects.
Cake is quite easy to install and set up; invoking cake from Vim while you are editing your project is then simply
:!cake build
and you can refresh your browser and see results.
As I'm also busy learning the best way of structuring the files and using CoffeeScript in combination with Backbone and Cake, I have created a small project on GitHub to keep as a reference for myself; maybe it will help you too with Cake and some basic things. All compiled files are in the www folder so that you can open them in your browser, and all source files (except for the Cake configuration) are in the src folder. In this example, all .coffee files are compiled and combined into one output .js file which is then included in the html.
Alternatively, you could use the --bare flag, compile to JavaScript, and then perhaps wrap the JS if necessary. But this would likely create problems; for instance, if you have one file with the code
i = 0
foo = -> i++
...
foo()
then there's only one var i declaration in the resulting JavaScript, and i will be incremented. But if you moved the foo function declaration to another CoffeeScript file, then its i would live in the foo scope, and the outer i would be unaffected.
So concatenating the CoffeeScript is a wiser solution, but there's still potential for confusion there; the order in which you concatenate your code is almost certainly going to matter. I strongly recommend modularizing your code instead.

Copy/publish images linked from the html files to another server and update the HTML files referencing them

I am publishing content from a Drupal CMS to static HTML pages on another domain, hosted on a second server. Building the HTML files was simple (using PHP/MySQL to write the files).
I need a list of the images referenced in my HTML, all of which live below the /userfiles/ directory. I extract them with:
cat *.html | grep -oE [^\'\"]+userfiles[\/.*]*/[^\'\"] | sort | uniq
which produces a list of files like:
http://my.server.com/userfiles/Another%20User1.jpg
http://my.server.com/userfiles/image/image%201.jpg
...
My next step is to copy these images across to the second server and translate the tags in the html files.
I understand that sed is probably the tool I would need. E.g.:
sed 's/[^"]\+userfiles[\/image]\?\/\([^"]\+\)/\/images\/\1/g'
It should change http://my.server.com/userfiles/Another%20User1.jpg to /images/Another%20User1.jpg, but I cannot work out exactly how I would use the script. I.e. can I use it to update the files in place, or do I need to juggle temporary files, etc.? And how can I ensure that the files end up in the correct location on the second server?
It's possible to use sed to change the file in-place using the -i option.
For your use case it's up to you whether it's easier/better to create a new file with the changes from the old and then copy it to the 2nd domain using scp (or something similar), or to copy the file first and then modify it once it's on the remote server (less management of new filenames that way).
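A minimal sketch of the in-place variant, assuming the HTML files sit in the current directory and the images live under ./userfiles/ locally; the host name and destination paths below are only placeholders:
# rewrite the image URLs in place, using the expression from the question
# (GNU sed; BSD/macOS sed needs -i '' or -i.bak)
sed -i 's/[^"]\+userfiles[\/image]\?\/\([^"]\+\)/\/images\/\1/g' *.html
# push the images and the rewritten pages to the second server
scp -r userfiles/* user@second.example.com:/var/www/images/
scp *.html user@second.example.com:/var/www/html/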

Resources