How to prevent yq removing comments and empty lines?

In Edit yaml objects in array with yq. Speed up Terminalizer's terminal cast (record) I asked how to edit YAML with yq and received a good answer. But by default yq removes comments and empty lines. How can I prevent this behavior?
input.yml
# Specify a command to be executed
# like `/bin/bash -l`, `ls`, or any other commands
# the default is bash for Linux
# or powershell.exe for Windows
command: fish -l
# Specify the current working directory path
# the default is the current working directory path
cwd: null
# Export additional ENV variables
env:
  recording: true
# Explicitly set the number of columns
# or use `auto` to take the current
# number of columns of your shell
cols: 110
execute
yq -y . input.yml
result
command: fish -l
cwd: null
env:
  recording: true
cols: 110

In some limited cases you could use diff/patch along with yq.
For example if input.yml contains your input text, the commands
$ yq -y . input.yml > input.yml.1
$ yq -y .env.recording=false input.yml > input.yml.2
$ diff input.yml.1 input.yml.2 > input.yml.diff
$ patch -o input.yml.new input.yml < input.yml.diff
creates a file input.yml.new with comments preserved but
recording changed to false:
# Specify a command to be executed
# like `/bin/bash -l`, `ls`, or any other commands
# the default is bash for Linux
# or powershell.exe for Windows
command: fish -l
# Specify the current working directory path
# the default is the current working directory path
cwd: null
# Export additional ENV variables
env:
recording: false
# Explicitly set the number of columns
# or use `auto` to take the current
# number of columns of your shell
cols: 110
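For convenience, these four steps can be wrapped in a small helper; a minimal sketch assuming the kislyuk/yq jq wrapper used above (the function name yq_edit_keep_comments is made up for illustration):
# Edit a YAML file with yq while restoring comments via diff/patch.
# Prints the patched result to stdout.
yq_edit_keep_comments() {
  local expr=$1 file=$2
  local tmp
  tmp=$(mktemp -d) || return
  yq -y . "$file"       > "$tmp/normalized.yml"   # re-dump without edits
  yq -y "$expr" "$file" > "$tmp/edited.yml"       # re-dump with the edit applied
  diff "$tmp/normalized.yml" "$tmp/edited.yml" > "$tmp/edit.diff"
  patch -s -o - "$file" < "$tmp/edit.diff"        # apply only the edit to the original
  rm -rf "$tmp"
}
# usage: yq_edit_keep_comments '.env.recording=false' input.yml > input.yml.new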

This is an improvement on the comment to How to prevent yq removing comments and empty lines?.
In my case diff -B and diff -wB were not enough, since they still do not keep blank lines and they generate the entire file difference as a single chunk instead of many small chunks.
Here is an example of the input (test.yml):
# This file is automatically generated
#
content-index:
  timestamp: 1970-01-01T00:00:00Z
  entries:
    - dirs:
        - dir: dir-1/dir-2
          files:
            - file: file-1.dat
              md5-hash:
              timestamp: 1970-01-01T00:00:00Z
            - file: file-2.dat
              md5-hash:
              timestamp:
            - file: file-3.dat
              md5-hash:
              timestamp:
        - dir: dir-1/dir-2/dir-3
          files:
            - file: file-1.dat
              md5-hash:
              timestamp:
            - file: file-2.dat
              md5-hash:
              timestamp:
If you try to edit a field and generate the difference file:
diff -B test.yml <(yq -y ".\"content-index\".timestamp=\"2022-01-01T00:00:00Z\"" test.yml)
It still removes blank lines:
5,7c2
<
< timestamp: 1970-01-01T00:00:00Z
<
---
> timestamp: '2022-01-01T00:00:00Z'
It also adds null everywhere instead of an empty field and changes the rest of the timestamp fields (which means you have to use '...' to retain these as-is):
17,19c8,9
< md5-hash:
< timestamp: 1970-01-01T00:00:00Z
<
---
> md5-hash: null
> timestamp: '1970-01-01T00:00:00+00:00'
The -wB flags change the difference file from a single chunk into multiple chunks, but blank lines are still removed.
Here is a mention of that diff issue: https://unix.stackexchange.com/questions/423186/diff-how-to-ignore-empty-lines/423188#423188
To fix that, you have to combine it with grep:
diff -wB <(grep -vE '^\s*$' test.yml) <(yq -y ".\"content-index\".timestamp=\"2022-01-01T00:00:00Z\"" test.yml)
Nevertheless, it still removes comments:
1,2d0
< # This file is automatically generated
< #
Here is a solution for that: https://unix.stackexchange.com/questions/17040/how-to-diff-files-ignoring-comments-lines-starting-with/17044#17044
So the complete one-liner is:
diff -wB <(grep -vE '^\s*(#|$)' test.yml) <(yq -y ".\"content-index\".timestamp=\"2022-01-01T00:00:00Z\"" test.yml) | patch -o - test.yml 2>/dev/null
where 2>/dev/null serves to suppress patch warnings like:
Hunk #1 succeeded at 6 (offset 4 lines).
To avoid it in real code, you can use the -s flag instead:
... | patch -s -o ...
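For example, to apply it to test.yml and keep the edited result, the patched output can go to a temporary file that then replaces the original (test.yml.new is an arbitrary name chosen here):
diff -wB <(grep -vE '^\s*(#|$)' test.yml) \
         <(yq -y '."content-index".timestamp="2022-01-01T00:00:00Z"' test.yml) \
  | patch -s -o test.yml.new test.yml \
  && mv test.yml.new test.yml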
Update:
CAUTION: This is the previous implementation; it has an issue when a line is added to the YAML file and is left here only as an example. See the Update 2 section for a more reliable implementation.
There is a better implementation as a shell script for a GitHub Actions composite action.
GitHub Composite action: https://github.com/andry81-devops/gh-action--accum-content
Bash scripts (previous implementation):
Implementation: https://github.com/andry81-devops/gh-workflow/blob/ee5d2d5b6bf59299e39baa16bb85357cf34a8561/bash/github/init-yq-workflow.sh
Example of usage: https://github.com/andry81-devops/gh-workflow/blob/9b9d01a9b60a65d6c3c29f5b4b200409fc6a0aed/bash/cache/accum-content.sh
The implementation can use either of two yq implementations:
https://github.com/kislyuk/yq - a jq wrapper (the default in the Cygwin distribution)
https://github.com/mikefarah/yq - a Go implementation (the default in the Ubuntu 20.04 distribution)
Search for the yq_edit, yq_diff, and yq_patch functions.
Update 2:
There is another discussion with some more reliable workarounds:
yq write strips completely blank lines from the output : https://github.com/mikefarah/yq/issues/515
Bash scripts (new implementation):
Implementation: https://github.com/andry81-devops/gh-workflow/blob/master/bash/github/init-yq-workflow.sh
Example of usage: https://github.com/andry81-devops/gh-workflow/blob/master/bash/cache/accum-content.sh
# Usage example:
#
>yq_edit "<prefix-name>" "<suffix-name>" "<input-yaml>" "$TEMP_DIR/<output-yaml-edited>" \
<list-of-yq-eval-strings> && \
yq_diff "$TEMP_DIR/<output-yaml-edited>" "<input-yaml>" "$TEMP_DIR/<output-diff-edited>" && \
yq_restore_edited_uniform_diff "$TEMP_DIR/<output-diff-edited>" "$TEMP_DIR/<output-diff-edited-restored>" && \
yq_patch "$TEMP_DIR/<output-yaml-edited>" "$TEMP_DIR/<output-diff-edited-restored>" "$TEMP_DIR/<output-yaml-edited-restored>" "<output-yaml>"
#
# , where:
#
# <prefix-name> - prefix name part for files in the temporary directory
# <suffix-name> - suffix name part for files in the temporary directory
#
# <input-yaml> - input yaml file path
# <output-yaml> - output yaml file path
#
# <output-yaml-edited> - output file name of edited yaml
# <output-diff-edited> - output file name of difference file generated from edited yaml
# <output-diff-edited-restored> - output file name of restored difference file generated from original difference file
# <output-yaml-edited-restored> - output file name of restored yaml file stored as intermediate temporary file
Example with test.yml from above:
export GH_WORKFLOW_ROOT='<path-to-gh-workflow-root>' # https://github.com/andry81-devops/gh-workflow
source "$GH_WORKFLOW_ROOT/bash/github/init-yq-workflow.sh"
[[ -d "./temp" ]] || mkdir "./temp"
export TEMP_DIR="./temp"
yq_edit 'content-index' 'edit' "test.yml" "$TEMP_DIR/test-edited.yml" \
".\"content-index\".timestamp=\"2022-01-01T00:00:00Z\"" && \
yq_diff "$TEMP_DIR/test-edited.yml" "test.yml" "$TEMP_DIR/test-edited.diff" && \
yq_restore_edited_uniform_diff "$TEMP_DIR/test-edited.diff" "$TEMP_DIR/test-edited-restored.diff" && \
yq_patch "$TEMP_DIR/test-edited.yml" "$TEMP_DIR/test-edited-restored.diff" "$TEMP_DIR/test.yml" "test-patched.yml" || exit $?
PROs:
Can restore blank lines together with standalone comment lines: # ...
Can restore line end comments: key: value # ...
Can detect line removals, changes, and additions.
CONs:
Because it relies on guess logic, it may leave artifacts or make invalid corrections.
Does not restore line-end comments on lines where the YAML data itself has changed.

Related

Problems with escaping in heredocs

I am writing a Jenkins job that will move files between two chrooted directories on a remote server.
This uses a Jenkins multiline string variable to store one or more file names, one per line.
The following will work for files without special characters or spaces:
## Jenkins parameters
# accountAlias = "test"
# sftpDir = "/path/to/chrooted home"
# srcDir = "/path/to/get/files"
# destDir = "/path/to/put/files"
# fileName = "file names" # multiline Jenkins shell parameter, one file name per line
#!/bin/bash
ssh user@server << EOF
#!/bin/bash
printf "\nCopying following file(s) from "${accountAlias}"_old account to "${accountAlias}"_new account:\n"
# Exit if no filename is given so Rsync does not copy all files in src directory.
if [ -z "${fileName}" ]; then
    printf "\n***** At least one filename is required! *****\n"
    exit 1
else
    # While reading each line of fileName
    while IFS= read -r line; do
        printf "\n/"${sftpDir}"/"${accountAlias}"_old/"${srcDir}"/"\${line}" -> /"${sftpDir}"/"${accountAlias}"_new/"${destDir}"/"\${line}"\n"
        # Rsync the files from old account to new account
        # -v | verbose
        # -c | replace existing files based on checksum, not timestamp or size
        # -r | recursively copy
        # -t | preserve timestamps
        # -h | human readable file sizes
        # -P | resume incomplete files + show progress bars for large files
        # -s | Sends file names without interpreting special chars
        sudo rsync -vcrthPs /"${sftpDir}"/"${accountAlias}"_old/"${srcDir}"/"\${line}" /"${sftpDir}"/"${accountAlias}"_new/"${destDir}"/"\${line}"
    done <<< "${fileName}"
fi
printf "\nEnsuring all new files are owned by the "${accountAlias}"_new account:\n"
sudo chown -vR "${accountAlias}"_new:"${accountAlias}"_new /"${sftpDir}"/"${accountAlias}"_new/"${destDir}"
EOF
Using the file name "sudo bash -c 'echo "hello" > f.txt'.txt" as a test, my script will fail after the "sudo" in the file name.
I believe my problem is that my $line variable is not properly quoted or escaped, resulting in bash not treating the $line value as one string.
I have tried single quotes and using awk/sed to insert backslashes into the variable string, but this hasn't worked.
My theory is I am running into a problem with special chars and heredocs.
Although it's unclear to me from your description exactly what error you are encountering or where, you do have several problems in the script presented.
The main one might simply be the sudo command that you're trying to execute on the remote side. Unless user has passwordless sudo privilege (rather dangerous) sudo will prompt for a password and attempt to read it from the user's terminal. You are not providing a password. You could probably just interpolate it into the command stream (in the here doc) if in fact you collect it. Nevertheless, there is still a potential problem with that, as you perform potentially many sudo commands, and they may or may not request passwords depending on remote sudo configuration and the time between sudo commands. Best would be to structure the command stream so that only one sudo execution is required.
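As a quick way to check which case applies on the remote host, sudo's non-interactive flag can be probed over ssh first; a minimal sketch (host name assumed from the question):
ssh user@server 'sudo -n true 2>/dev/null && echo "passwordless sudo" || echo "sudo will prompt for a password"'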
Additional considerations follow.
## Jenkins parameters
# accountAlias = "test"
# sftpDir = "/path/to/chrooted home"
# srcDir = "/path/to/get/files"
# destDir = "/path/to/put/files"
# fileName = "file names" # multiline Jenkins shell parameter, one file name per line
#!/bin/bash
The #!/bin/bash there is not the first line of the script, so it does not function as a shebang line. Instead, it is just an ordinary comment. As a result, when the script is executed directly, it might or might not be bash that runs it, and if it is bash, it might or might not be running in POSIX compatibility mode.
ssh user@server << EOF
#!/bin/bash
This #!/bin/bash is not a shebang line either, because that applies only to scripts read from regular files. As a result, the following commands are run by user's default shell, whatever that happens to be. If you want to ensure that the rest is run by bash, then perhaps you should execute bash explicitly.
printf "\nCopying following file(s) from "${accountAlias}"_old account to "${accountAlias}"_new account:\n"
The two expansions of $accountAlias (by the local shell) result in unquoted text passed to printf in the remote shell. You could consider just removing the de-quoting, but that would still leave you susceptible to malicious accountAlias values that included double-quote characters. Remember that these will be expanded on the local side, before the command is sent over the wire, and then the data will be processed by a remote shell, which is the one that will interpret the quoting.
This can be resolved by
Outside the heredoc, preparing a version of the account alias that can be safely presented to the remote shell
accountAlias_safe=$(printf %q "$accountAlias")
and
Inside the heredoc, expanding it unquoted. I would furthermore suggest passing it as a separate argument instead of interpolating it into the larger string.
printf "\nCopying following file(s) from %s_old account to %s_new account:\n" ${accountAlias_safe} ${accountAlias_safe}
Similar applies to most of the other places where variables from the local shell are interpolated into the heredoc.
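As a quick illustration of what printf %q produces and why the remote shell can safely re-parse it, here is a minimal sketch with a hypothetical value:
#!/bin/bash
# Hypothetical value containing spaces and double quotes:
accountAlias='test "alias" with spaces'
accountAlias_safe=$(printf %q "$accountAlias")
# Prints an escaped form such as: test\ \"alias\"\ with\ spaces
echo "$accountAlias_safe"
# A shell that re-parses the unquoted expansion sees the original single word again:
eval "printf '%s\n' $accountAlias_safe"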
Here ...
# Exit if no filename is given so Rsync does not copy all files in src directory.
if [ -z "${fileName}" ]; then
... why are you performing this test on the remote side? You would save yourself some trouble by performing it on the local side instead.
Here ...
printf "\n/"${sftpDir}"/"${accountAlias}"_old/"${srcDir}"/"\${line}" -> /"${sftpDir}"/"${accountAlias}"_new/"${destDir}"/"\${line}"\n"
... remote shell variable $line is used unquoted in the printf command. Its appearance should be quoted. Also, since you use the source and destination names twice each, it would be cleaner and clearer to put them in (remote-side) variables. AND, if the directory names have the form presented in comments in the script, then you are introducing excess / characters (though these probably are not harmful).
Good for you, documenting the meaning of all the rsync options used, but why are you sending all that over the wire to the remote side?
Also, you probably want to include rsync's -p option to preserve the same permissions. Possibly you want to include the -l option too, to copy symbolic links as links.
Putting all that together, something more like this (untested) is probably in order:
#!/bin/bash
## Jenkins parameters
# accountAlias = "test"
# sftpDir = "/path/to/chrooted home"
# srcDir = "/path/to/get/files"
# destDir = "/path/to/put/files"
# fileName = "file names" # multiline Jenkins shell parameter, one file name per line
# Exit if no filename is given so Rsync does not copy all files in src directory.
if [ -z "${fileName}" ]; then
printf "\n***** At least one filename is required! *****\n"
exit 1
fi
accountAlias_safe=$(printf %q "$accountAlias")
sftpDir_safe=$(printf %q "$sftpDir")
srcDir_safe=$(printf %q "$srcDir")
destDir_safe=$(printf %q "$destDir")
fileName_safe=$(printf %q "$fileName")
IFS= read -r -p 'password for user@server: ' -s -t 60 password || {
echo 'password not entered in time' 1>&2
exit 1
}
# Rsync options used:
# -v | verbose
# -c | replace existing files based on checksum, not timestamp or size
# -r | recursively copy
# -t | preserve timestamps
# -h | human readable file sizes
# -P | resume incomplete files + show progress bars for large files
# -s | Sends file names without interpreting special chars
# -p | preserve file permissions
# -l | copy symbolic links as links
ssh user@server /bin/bash << EOF
printf "\nCopying following file(s) from %s_old account to %s_new account:\n" ${accountAlias_safe} ${accountAlias_safe}
sudo /bin/bash -c '
    while IFS= read -r line; do
        src=${sftpDir_safe}/${accountAlias_safe}_old${srcDir_safe}/"\${line}"
        dest=${sftpDir_safe}/${accountAlias_safe}_new${destDir_safe}/"\${line}"
        printf "\n\${src} -> \${dest}\n"
        rsync -vcrthPspl "\${src}" "\${dest}"
    done <<<'${fileName_safe}'
    printf "\nEnsuring all new files are owned by the %s_new account:\n" ${accountAlias_safe}
    chown -vR ${accountAlias_safe}_new:${accountAlias_safe}_new ${sftpDir_safe}/${accountAlias_safe}_new${destDir_safe}
'
${password}
EOF

Need help formatting Tshark command string from bash script

I'm attempting to run multiple parallel instances of tshark to comb through a large number of pcap files in a directory and copy the filtered contents to a new file. I'm running into an issue where tshark throws an error on the command I'm feeding it.
It must have something to do with the way the command string is interpreted by tshark, as I can copy/paste the formatted command string into the console and it runs just fine. I've tried formatting the command several ways and read threads from others who had similar issues. I believe I'm formatting it correctly, but I still get the error.
Here's what I'm working with:
Script #1: - filter
#Takes user arguments <directory> and <filter> and runs a filter on all captures for a given directory.
#
#TO DO:
#Add user prompts and data sanitization to avoid running bogus job.
#Add concatenation via mergecap /w .pcap suffix
#Delete filtered, unmerged files
#Add mtime filter for x days of logs
starttime=$(date)
if [$1 = '']; then echo "no directory specified, you must specify a directory (VLAN)"
else if [$2 = '']; then echo "no filter specified, you must specify a valid tshark filter expression"
else
echo $2 > /home/captures-user/filtered/filter-reference
find /home/captures-user/Captures/$1 -type f | xargs -P 5 -L 1 /home/captures-user/tshark-worker
rm /home/captures-user/filtered/filter-reference
fi
fi
echo Start time is $starttime
echo End time is $(date)
Script #2: - tshark-worker
# $1 = path and file name
#takes the output from the 'filter' command stored in a file and loads a local variable with it
filter=$(cat /home/captures-user/filtered/filter-reference)
#strips the directory off the current working file
file=$(sed 's/.*\///' <<< $1 )
echo $1 'is the file to run' $filter 'on.'
#runs the filter and places the filtered results in the /filtered directory
command=$"tshark -r $1 -Y '$filter' -w /home/captures-user/filtered/$file-filtered"
echo $command
$command
When I run ./filter ICE 'ip.addr == 1.1.1.1' I get the following output for each file. Note that the inclusion of == in the filter expression is not the issue; I've tried substituting 'or' and get the same output. Also, tshark is not aliased to anything, and there's no script with that name; it's the raw tshark executable in /usr/sbin.
Output:
/home/captures-user/Captures/ICE/ICE-2019-05-26_00:00:01 is the file to run ip.addr == 1.1.1.1 on.
tshark -r /home/captures-user/Captures/ICE/ICE-2019-05-26_00:00:01 -Y 'ip.addr == 1.1.1.1' -w /home/captures-user/filtered/ICE-2019-05-26_00:00:01-filtered
tshark: Display filters were specified both with "-d" and with additional command-line arguments.
As I mentioned in the comments, I think this is a problem with quoting and how your command is constructed due to spaces in the filter (and possibly in the file name and/or path).
You could try changing your tshark-worker script to something like the following:
# $1 = path and file name
#takes the output from the 'filter' command stored in a file and loads a local variable with it
filter="$(cat /home/captures-user/filtered/filter-reference)"
#strips the directory off the current working file
file="$(sed 's/.*\///' <<< $1 )"
echo $1 'is the file to run' $filter 'on.'
#runs the filter and places the filtered results in the /filtered directory
tshark -r "${1}" -Y "${filter}" -w "${file}"-filtered
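If you do want to build the command programmatically, a bash array keeps each argument intact, including the filter containing spaces; a minimal sketch using the paths from the question:
#!/bin/bash
# $1 = path and file name (as in the original worker script)
filter="$(cat /home/captures-user/filtered/filter-reference)"
file="$(basename -- "$1")"
# Build the command as an array: each element stays a single argument,
# so the spaces in the filter never get re-split or re-quoted.
cmd=(tshark -r "$1" -Y "$filter" -w "/home/captures-user/filtered/${file}-filtered")
printf '%q ' "${cmd[@]}"; echo   # show what will be executed
"${cmd[@]}"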

no such file or directory error when using variables (works otherwise)

I am new to programming and just starting in bash.
I'm trying to print a list of directories and files to a txt file, and remove some of the path that gets printed to make it cleaner.
It works with this:
TODAY=$(date +"%Y-%m-%d")
cd
cd Downloads
ls -R ~/Music/iTunes/iTunes\ Media/Music | sed 's/\/Users\/BilPaLo\/Music\/iTunes\/iTunes\ Media\/Music\///g' > music-list-$TODAY.txt
But to clean it up I want to use variables like so,
# Creates a string of the date, format YYYY-MM-DD
TODAY="$(date +"%Y-%m-%d")"
# Where my music folders are
MUSIC="$HOME/Music/iTunes/iTunes\ Media/Music/"
# Where I want it to go
DESTINATION="$HOME/Downloads/music-list-"$TODAY".txt"
# Path name to be removed from text file
REMOVED="\/Users\/BilPaLo\/Music\/iTunes\/iTunes\ Media\/Music\/"
ls -R "$MUSIC" > "$DESTINATION"
sed "s/$REMOVED//g" > "$DESTINATION"
but it gives me a 'no such file or directory' error that I can't seem to get around.
I'm sure there are many other problems with this code but this one I don't understand.
Thank you everyone! I followed the much-needed formatting advice and @amo-ej1's answer, and now this works:
# Creates a string of the date format YYYY-MM-DD
today="$(date +"%Y-%m-%d")"
# Where my music folders are
music="$HOME/Music/iTunes/iTunes Media/Music/"
# Where I want it to go
destination="$HOME/Downloads/music-list-$today.txt"
# Temporary file
temp="$HOME/Downloads/temp.txt"
# Path name to be removed of text file to only leave artist name and album
remove="\\/Users\\/BilPaLo\\/Music\\/iTunes\\/iTunes\\ Media\\/Music\\/"
# lists all children of music and writes it in temp
ls -R "$music" > "$temp"
# substitutes remove by nothing and writes it in destination
sed "s/$remove//g" "$temp" > "$destination"
rm "$temp" # deletes temp
First, when debugging bash it can be helpful to start bash with the -x flag (bash -x script.sh), or to add set -x inside the script; that way bash prints out the commands it is executing (with the variable expansions) and you can more easily spot errors.
In this specific snippet the ls output is redirected to a file called $DESTINATION, and sed then reads from standard input and also writes to $DESTINATION. So the way you replaced the pipe from your one-liner is wrong; as a result the program will look as if it is blocked, while sed is simply waiting for input to arrive on standard input.
As for the 'no such file or directory' error, try executing with set -x and double-check the paths it is trying to access.
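If you prefer to keep the original one-liner's pipe instead of a temporary file, here is a minimal sketch with the question's paths; it also uses | as the sed delimiter, which avoids escaping every / in the pattern:
music="$HOME/Music/iTunes/iTunes Media/Music/"
destination="$HOME/Downloads/music-list-$(date +%Y-%m-%d).txt"
# With | as the delimiter the slashes in the path need no escaping
remove="$HOME/Music/iTunes/iTunes Media/Music/"
ls -R "$music" | sed "s|$remove||g" > "$destination"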

Convert Linux shell script for Mac OS X

How do we convert the below shell script so that the same result can be achieved on Mac OS X?
# To generate secure SSH deploy key for a github repo to be used from Travis
# https://gist.github.com/floydpink/4631240
base64 --wrap=0 ~/.ssh/id_rsa_deploy > ~/.ssh/id_rsa_deploy_base64
ENCRYPTION_FILTER="echo \$(echo \"- secure: \")\$(travis encrypt \"\$FILE='\`cat $FILE\`'\" -r floydpink/harimenon.com)"
split --bytes=100 --numeric-suffixes --suffix-length=2 --filter="$ENCRYPTION_FILTER" ~/.ssh/id_rsa_deploy_base64 id_rsa_
# To reconstitute the private SSH key once running inside Travis (typically from 'before_script')
echo -n $id_rsa_{00..30} >> ~/.ssh/id_rsa_base64
base64 --decode --ignore-garbage ~/.ssh/id_rsa_base64 > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
echo -e "Host github.com\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
I could figure out the equivalent base64 command to be:
base64 --break=0 id_rsa_deploy > id_rsa_deploy_base64
But it looks like the split command on Mac OS X is a little different from Linux/Unix and does not have the --filter option.
EDIT: This is a gist I stumbled onto from this blog entry that details how to auto-deploy an Octopress blog to GitHub using Travis CI.
I had successfully done this from Ubuntu Linux and had blogged about it as well in the past, but could not repeat it from a Mac.
You may want to install coreutils from brew (the missing package manager for OS X) and then use gsplit:
$ brew install coreutils
$ gsplit --help
Usage: gsplit [OPTION]... [INPUT [PREFIX]]
Output fixed-size pieces of INPUT to PREFIXaa, PREFIXab, ...; default
size is 1000 lines, and default PREFIX is 'x'. With no INPUT, or when INPUT
is -, read standard input.
Mandatory arguments to long options are mandatory for short options too.
-a, --suffix-length=N generate suffixes of length N (default 2)
--additional-suffix=SUFFIX append an additional SUFFIX to file names.
-b, --bytes=SIZE put SIZE bytes per output file
-C, --line-bytes=SIZE put at most SIZE bytes of lines per output file
-d, --numeric-suffixes[=FROM] use numeric suffixes instead of alphabetic.
FROM changes the start value (default 0).
-e, --elide-empty-files do not generate empty output files with '-n'
--filter=COMMAND write to shell COMMAND; file name is $FILE
-l, --lines=NUMBER put NUMBER lines per output file
-n, --number=CHUNKS generate CHUNKS output files. See below
-u, --unbuffered immediately copy input to output with '-n r/...'
--verbose print a diagnostic just before each
output file is opened
--help display this help and exit
--version output version information and exit
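With coreutils installed, the Linux-only pieces of the gist can be swapped for their g-prefixed equivalents; a minimal sketch, assuming the same key paths and repository as the original gist:
#!/bin/bash
# macOS version using GNU coreutils installed via Homebrew (gbase64, gsplit)
gbase64 --wrap=0 ~/.ssh/id_rsa_deploy > ~/.ssh/id_rsa_deploy_base64
ENCRYPTION_FILTER="echo \$(echo \"- secure: \")\$(travis encrypt \"\$FILE='\`cat $FILE\`'\" -r floydpink/harimenon.com)"
gsplit --bytes=100 --numeric-suffixes --suffix-length=2 --filter="$ENCRYPTION_FILTER" ~/.ssh/id_rsa_deploy_base64 id_rsa_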

VIM - Passing colon-commands list via command-line

Good day,
I am writing a simple script within my BASHRC file to accommodate something I couldn't quite resolve in a previous question:
Side-by-side view in Vim of svn-diff for entire directory
I basically generate a list of all files which have a "Modified" SVN status. For each of these files, I want to create a side-by-side visual diff, convert it to HTML, then append it to a running HTML file.
eg:
MODIFIED_FILES="$(svn status | grep "^M" | cut -c9-)"
for i in ${MODIFIED_FILES}; do
    # Generate a side-by-side diff in vim via VIMDIFF
    # Convert via ToHTML
    # Append the HTML file to a file called "overall_diff.html"
done
I can accomplish the vimdiff easily enough by creating a clean copy of the file, and having a copy of the modified file.
vimdiff has an issue at first, i.e.:
2 files to edit
Error detected while processing /Users/Owner/.vimrc:
line 45:
E474: Invalid argument: listchars=tab:>-,trail:.,extends:>,precedes:«
Press ENTER or type command to continue
So, I am trying to get past this so I don't have to hit ENTER for each file in my list.
Next, I need to have vimdiff call the ToHTML command, and then issue the command to append the HTML buffer to a running file:
:'<,'>w! >>overall_diff.html
In short, how do I:
Get past this issue with listchars when vimdiff is called. This issue doesn't occur when I run vim, so I don't know why it occurs when I run vimdiff.
Pass a list of colon-commands to VIM to have it run them at startup without requiring a change to my .vimrc file.
In the end, I created a separate VIMRC file that gets passed to the vim command at run time, via:
vim -d file1 file2 -u my_special_vimrc_file
function createVimDiff()
{
    # Create some buffers
    TEMP_FILE="./tmp_file"
    VIM_TEMP="./temp.html"
    REVISION=""
    BUFFER_FILE="./overall_diff.html"
    # Get a list of the files that have changed
    MODIFIED_FILES="$(svn status | grep '^M' | cut -c9-)"
    # Remove buffers
    rm "${BUFFER_FILE}"
    for i in ${MODIFIED_FILES}; do
        # Remove intermediate buffers
        rm "${TEMP_FILE}"
        rm "${VIM_TEMP}"
        # Get the current SVN rev number for the current file
        REVISION="$(svn info ${i} | grep Revision)"
        # Echo the name of the file to the report
        echo "FILE: ${i}" >> "${BUFFER_FILE}"
        # Same with the revision number
        echo "${REVISION}" >> "${BUFFER_FILE}"
        echo "<br>" >> "${BUFFER_FILE}"
        # First print a copy of the unmodified file in a temporary buffer
        svn cat "${i}" > "${TEMP_FILE}"
        # Now print the unmodified file on the left column, and the
        # modified file in the right column, so they appear side-by-side
        vim -d "${TEMP_FILE}" "${i}" -u ~/.vimdiff_rc
        # Write the side-by-side diff to a file
        cat "${VIM_TEMP}" >> "${BUFFER_FILE}"
        echo "<br>" >> "${BUFFER_FILE}"
    done
    # Cleanup temporary buffers
    rm "${TEMP_FILE}"
    rm "${VIM_TEMP}"
}
And the following was put into my VIMRC file:
" Convert the diff to HTML
autocmd VimEnter * silent TOhtml
" Write output to temporary buffer
autocmd VimEnter * w! ./temp.html
" Quit VIM
autocmd VimEnter * qa!
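A minimal usage sketch, assuming the function and the vimrc snippet above are in place and you run it from an SVN working copy (the path is a placeholder):
cd /path/to/svn/working-copy     # run from the root of the SVN checkout
createVimDiff                    # builds ./overall_diff.html from all files with SVN status M
open ./overall_diff.html         # view the report (macOS; use xdg-open on Linux)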