Adding a new language to ctags does not work

I am trying to add the .volt extension to the ctags language map, but it keeps ignoring .volt files. This is the content of my .ctags file:
--recurse=yes
--tag-relative=yes
--exclude=*.git*
--exclude=.DS_Store
--langmap=html:+.volt
When I run ctags --list-maps I can see .volt files included for HTML:
HTML *.htm *.html *.volt
But still, when I run ctags, it completely ignores .volt files. What am I doing wrong here?

The unexpected behavior most likely has two causes:
You are not using the current latest version 5.8 of Exuberant Ctags, but a version older than 5.6.
Your .ctags file has --langmap=html:+.volt on its last line with no line termination.
Read the full story below on why I think these two factors result in the unexpected behavior of Ctags on your computer.
I looked into your problem on Windows, first using the older version 5.5.4 of Exuberant Ctags installed with the text editor UltraEdit, and later also with version 5.8 downloaded directly from the Exuberant Ctags project page.
I created a copy of one of my HTML projects with just one *.html file in the parent directory of the test project and three *.html files in a subdirectory. Two more files in the subdirectory were copies of two of those three *.html files, with the file extension changed from html to volt.
Next I created a ctags.conf file in the parent directory of the project and copied the few lines you posted into it. Additionally I inserted a line with --verbose at the top, as this is useful when looking into problems like this.
Last, I copied ctags.exe (first v5.5.4, later v5.8) into the test project directory as well, just to make it easier to run from the command line.
I opened a command prompt window in the test project directory and executed
ctags.exe -f test.tag --options=ctags.conf
The verbose output showed that both *.volt files were opened for processing, and the created test.tag also contained all the tags from the two *.volt files, the same tags as in the two *.html files from which the *.volt files had been copied.
So what could be the problem?
I'm not only familiar with HTML; my main job is programming in C/C++. So I know about a common mistake in C source code that reads text files: mishandling text files whose last line has no line termination.
I also know that some text editors, such as gedit on Linux, position the caret on Ctrl+End at the beginning of the line below the last line of the file even when the last line has no line termination. In that case the caret should instead be positioned at the end of the string on the last line, not at the beginning of a line beyond the real end of the file. This, in my view incorrect, behavior lets a user of the text editor believe that the last line of the file is terminated even when it is not.
So I suspected that you had appended --langmap=html:+.volt at the end of the file without a line termination, and that ctags.exe does not evaluate the line in this case because of sloppy text file parsing in its source code. Therefore I removed the line termination from the last line of ctags.conf, which then contained only --langmap=html:+.volt
I executed the same command line as before and, aha, both *.volt files were now ignored because of an unknown language.
At this point I downloaded version 5.8 of Ctags for Windows and copied it into the test project directory, replacing the version 5.5.4 executable.
I executed the command line again without modifying ctags.conf further, so its last line still had no line termination. Both *.volt files were processed by Ctags and test.tag again contained the tags from both *.volt files.
Appending a line termination to the last line of ctags.conf once more and re-running the command line did not change the output. So this bug of ignoring the last line of the options file when it has no line termination at the end of the file is fixed in version 5.8 of Ctags.
Finally, I searched the Change Notes of Exuberant Ctags and found in the notes block for ctags-5.6 (Mon May 29 2006):
Fixed problem reading last line of list file (-L) without final newline.
This confirms what I suspected and observed. And of course the problem existed not only when reading the list file, but also when reading other text files such as the options file, or C and Java files, as the next line in the change notes states:
Fixed infinite loop that could occur on files without final newline [C, Java].
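If upgrading is not an option, you can also check from a shell whether the options file ends with a newline and append one if it does not. A minimal sketch, assuming a POSIX shell and that the options file is named .ctags (adjust the name as needed):
# show the last byte of the file; anything other than \n means the
# final line is unterminated
tail -c 1 .ctags | od -c
# command substitution strips a trailing newline, so a non-empty result
# means the last line has no line termination; append one in that case
[ -n "$(tail -c 1 .ctags)" ] && printf '\n' >> .ctags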

If the ctags binary is actually Universal Ctags, you need to put or link your config file here (see man ctags-universal, FILES section):
~/.ctags.d/my-config.ctags
The .ctags file extension matters.
In my case, I needed ctags to support the Arduino (.ino) file type, so I added --langmap=c++:+.ino to ~/.ctags.d/local.ctags (which really is just a symlink to my ~/.ctags).
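For example, the whole setup can look like this; a minimal sketch, assuming a POSIX shell and that local.ctags is just an example file name:
# create the per-user configuration directory read by Universal Ctags
mkdir -p ~/.ctags.d
# the file must have the .ctags extension to be picked up
echo '--langmap=c++:+.ino' >> ~/.ctags.d/local.ctags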
Check:
ctags --list-maps | grep C++
C++ *.c++ *.cc *.cp *.cpp *.cxx *.h *.h++ *.hh *.hp *.hpp *.hxx *.inl *.C *.H *.CPP *.CXX *.ino
[...]
Notice *.ino at the end of the line listing known extensions.

Related

pandoc to make each directory a chapter

I have a lot of markdown files in various directories each with the same format (# title, then ## sub-title).
Can I make the --toc respect the folder layout, so that the folder itself becomes the chapter name and each markdown file becomes the content of that chapter?
So far pandoc totally ignores my folder names; it behaves the same as if all the markdown files were in the same folder.
My approach is to create an index file in each folder with a first-level heading and downgrade the headings in all other files by one level.
I use Git, and by default I keep the normal structure with first-level headings in the files. When I want to generate the ebook with pandoc, I modify the files via an automated Linux shell script, and afterwards I revert the changed files via Git.
Here's the script:
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e "s/^(#)+\s/#$&/g" {} \;
./docs/*/ means I'm looking only for files inside subfolders of the docs directory, like docs/foo/file1.md or docs/bar/file2.md.
I'm also interested only in *.md files, excluding *index.md files.
In the index.md files (which I usually name 00-index.md so they appear first), I put a first-level heading #, and because those files are excluded by the find portion of the script, their headings aren't downgraded.
Next, there's a Perl search-and-replace command with the regular expression s/^(#)+\s/#$&/g that finds all lines starting with one or more # and prepends another # to them.
Finally, I run pandoc with --toc-depth=2 so the table of contents contains only first- and second-level headings.
pandoc ./docs/**/*.md --verbose --fail-if-warnings --toc-depth=2 --table-of-contents -o ./ebook.epub
To revert all changes made to files, I restore changes in the Git repo.
git restore .
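The whole round trip can be wrapped in one script; a rough sketch, assuming bash with the globstar option enabled so the ** glob recurses, run from the repository root:
#!/bin/bash
shopt -s globstar
# downgrade headings in all chapter files except the index files
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e "s/^(#)+\s/#$&/g" {} \;
# build the epub with a two-level table of contents
pandoc ./docs/**/*.md --verbose --fail-if-warnings --toc-depth=2 --table-of-contents -o ./ebook.epub
# undo the heading changes
git restore .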

Use Makefile to convert postcript files to PDF and keep them updated?

Suppose I have 100 .ps files in a directory, called doc1.ps, doc2.ps, ... , doc100.ps. I would like to write a Makefile to do the following: (1) when I run "make", all the files matching the pattern doc*.ps should be converted to the pdf format (without deleting the original copy) using the command line program ps2pdf. (2) Any .ps files not matching the name pattern doc*.ps should be left untouched. (3) Whenever a file doc*.ps is updated, running "make" again should only update the PDF copy of this specific file, without converting all of them again. How can this be done?
P.S. I don't want to type the names of the .ps files explicitly into the Makefile, because this is tedious when there are many files. I'd like to have Makefile handle the matching of wildcard filenames.
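A pattern rule plus a wildcard-derived target list covers all three requirements; a minimal sketch, assuming GNU make and ps2pdf on the PATH, with the Makefile in the same directory as the .ps files:
# every doc*.ps file, and the .pdf name derived from each of them
PS  := $(wildcard doc*.ps)
PDF := $(PS:.ps=.pdf)

# default goal: build only the PDFs that are missing or older than their .ps
all: $(PDF)

# pattern rule: each docN.pdf is rebuilt from its docN.ps
# (the recipe line must start with a tab)
%.pdf: %.ps
	ps2pdf $< $@

.PHONY: all
Files not matching doc*.ps are never listed in $(PDF), so they stay untouched, and make's timestamp checking ensures only updated .ps files are reconverted.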

sql loader without .dat extension

Oracle's sqlldr defaults to a .dat extension for the data file, which I want to override. I don't want to rename the file. Googling turned up a few answers suggesting a trailing dot, like data='fileName.', but that is not working. Share your ideas, please.
The error message is that fileName.dat is not found.
SQL*Loader has a default extension for each of its input files:
data = .dat
log = .log
control = .ctl
bad = .bad
parfile = .par
But you have to pass the filename without the apostrophe and dot:
sqlldr user/pass@db control=control data=data
SQL*Loader will then add the extensions itself: control.ctl, data.dat
Nevertheless, I do not understand why you do not want to specify the extension.
You can't, at least in Unix/Linux environments. In Windows you can use the trailing period trick, specifying either INFILE 'filename.' in the control file or DATA=filename. on the command line. Windows file name handling allows that; you can for instance do DIR filename. at a command prompt and it will list the file with no extension (as will DIR filename). But you can't do that with *nix, from a shell prompt or anywhere else.
You said you don't want to copy or rename the file. Temporarily renaming it might be the simplest solution, but as you may have a reason not to do that even briefly, you could instead create a hard or soft link to the file, give the link a name with an extension, and use that link as the target instead. You could wrap that in a shell script that takes the file name argument:
# set variable from the correct positional parameter; if you pass in the control
# file name or other options, this might not be $1, so adjust as needed
# if the temporary link won't be in the same directory, this needs to be a full path
filename=$1
# optionally check file exists, is readable, etc. but overkill for demo
# can also check temporary file does not already exist - stop or remove
# create soft link somewhere it won't impact any other processes
ln -s ${filename} /tmp/${filename##*/}.dat
# run SQL*Loader with soft link as target
sqlldr user/password@db control=file.ctl data=/tmp/${filename##*/}.dat
# clean up
rm -f /tmp/${filename##*/}.dat
You can then call that as:
./scriptfile.sh /path/to/filename
If you can create the link in the same directory then you only need to pass the file name, but if it's somewhere else - which may be necessary depending on why renaming isn't an option, and desirable either way - then you need to pass the full path of the data file so the link works. (If the temporary file will be in the same filesystem you could use a hard link, and you wouldn't have to pass the full path then either, but it's still cleaner to do so.)
As you haven't shown your current command line options you may have to adjust that to take into account anything else you currently specify there rather than in the control file, particularly which positional argument is actually the data file path.
I have the same issue. I get a monthly download of reference data used in a medical application, and the 485 downloaded files (about 2 GB) don't have file extensions. Unless I can load them without file extensions, I have to copy the files with a .dat extension and load from there.
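If many extensionless files have to be loaded this way, the link trick from the script above can be put in a loop; a rough sketch where the download directory, control file name, and connection string are all hypothetical placeholders to adjust:
# hypothetical names: /path/to/downloads, reference.ctl, user/password@db
for f in /path/to/downloads/*; do
  ln -s "$f" "/tmp/${f##*/}.dat"      # give the file a temporary .dat name
  sqlldr user/password@db control=reference.ctl data="/tmp/${f##*/}.dat"
  rm -f "/tmp/${f##*/}.dat"           # clean up the link
done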

How to extract a specific folder using IZARC (IZARCe)

I want to extract a specific directory from a huge zip file (>5 GB) that is somewhat corrupted because of an unavoidably badly maintained build system that creates the zip.
GUI tools such as WinRAR and 7-Zip have no issues extracting the files, but some command line tools such as MKS unzip and 7za fail to extract from the corrupted archive.
After a lot of digging around and trying out many such command line utilities, I found that IZArc successfully extracts files from the archive.
I am running the following command:
IZARCe.exe -e -d -o D:\aHugeZipFile.zip -pD:\temp @"source.txt"
The listing file source.txt contains just one entry:
source/lib/*
which is the only directory in the archive, from where the contents are to be extracted.
But it results in:
IZArc Command Line Extraction Add-On Version 1.1 (Build: 130)
Copyright(c) 2007 Ivan Zahariev, All Rights Reserved.
http://www.izarc.org contact@izarc.org
Archive File: aHugeZipFile.zip
WARNING: Nothing to do!
I have tried specifying:
/source/lib/*
source/lib/*
source/lib/
source/lib
*source/lib/*
in the listing file, all to no avail! :(
Any pointers on where the error is occurring, and how to fix the issue will be of great help. Thank you in advance!
Using relative or absolute paths in listfiles doesn't appear to work with IZArc. Try using wildcards such as *, *.doc, etc. instead of paths in the listfile. Be aware that there appears to be a limitation on the folder depth that IZArc will extract to, as well as a tendency to generate CRC errors when files with the same name are present in the same archive, even if they are in different directories.
I would suggest using 7-Zip command-line instead. It can recurse deeply through a file structure without error and can use relative directories and wildcards in its listfiles.
The following 7-Zip command was tested and worked perfectly.
7za x somearchive.zip -o"C:\Documents and Settings\me\desktop\temp_folder\test2" -ir@source.txt -aoa -scsWIN
The source.txt file may contain a combination of relative paths and/or wildcards on separate lines, such as:
Output/, Folder2/, *, or *.doc.
In the command above: x (extract with full paths), -ir (include filenames, recurse subdirectories), -aoa (overwrite existing files without prompt), -scsWIN (set charset for list files). You may need to adjust these switches for your situation.

do not include required files into vim omnicompletion

If I try to autocomplete something in a Ruby file that has a require 'xxx' statement, it starts to scan all required files (and files required by those required files as well), and it does that every freakin time!
Is it possible to make vim autocompletion NOT scan required files, or scan only files in a particular path (e.g. app/ only)?
One of the following should work:
:set path=.,/myinclude1,/myinclude2 to set your own include path
:set complete-=i to disable use of included files in default completion
:set include= to unset the include file matching pattern
I would suggest you use the second one, so that CTRL-X CTRL-I will still work correctly.
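If you want that second option applied automatically, something like this in your ~/.vimrc would do it; a small sketch that limits the change to Ruby buffers:
" drop included (required) files from the default completion sources for Ruby
autocmd FileType ruby setlocal complete-=i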
