Bash complete function - Separating completion parts with character other than space - bash

I've written a bash completion script to essentially do file/directory completion, but using . as the separator instead of /. However, it's not behaving as I expect it to.
Before I dive further, does anyone know of any options for this, or something that's already been written that can do this? The motivation for this is to enable completion when calling python with the -m flag. It seems crazy that this doesn't exist yet, but I was unable to find anything relevant.
My issue is that bash doesn't recognize . as a separator for completion options, and won't show the next options until I add an additional space to the end of the current command.
Here are a few concrete examples, given this directory structure:
module/
    script1.py
    script2.py
For instance, when I use the ls command, it works like this:
$ ls mo<TAB>
$ ls module/<TAB><TAB>
script1.py script2.py
However, with my function, it's working like this:
$ python -m mod<TAB>
$ python -m module.<TAB><TAB>
module.
So instead of showing the next entries, it just shows the finished string again. However, if I add a space, it then works, but I don't want it to include the space:
$ python -m mod<TAB>
$ python -m module. <TAB><TAB> # (note the space here after the dot)
script1 script2 # (Note, I'm intentionally removing the file extension here).
I'd like the completion to act just like the bottom example, except without being forced to include the space to move to the next set of options.
I've got about 50 tabs open and have tried a bunch of recommendations, but nothing seems to solve this the way I'd like. There are a few other caveats here that would take a lot of time to go through, so I'm happy to expand on any point if I've skipped something important. I've attached my code below; any help would be greatly appreciated. Thanks!
#!/bin/bash
_python_target() {
    local cur opts cur_path
    # Retrieve the current typed argument
    cur="${COMP_WORDS[COMP_CWORD]}"
    # Prepare an array to store the list of available completions;
    # COMPREPLY will be checked to suggest the list
    COMPREPLY=()
    # Here, we'll only handle the case of "-m";
    # hence, the classic autocompletion is disabled
    # (i.e. COMPREPLY stays an empty array)
    if [[ "${COMP_WORDS[1]}" != "-m" ]]
    then
        return 0
    fi
    # add each path component to the current path to check for additional files
    cur_path=""
    for word in ${COMP_WORDS[@]:2:COMP_CWORD-2}; do
        path_component=$(echo ${word} | sed 's/\./\//g')
        cur_path="${cur_path}${path_component}"
    done
    cur_path="./${cur_path}"
    if [[ ! -f "$cur_path" && ! -d "$cur_path" ]]; then
        return 0
    fi
    # this is not very pretty, but it works. Open to comments on this too
    file_opts="$(find ${cur_path} -maxdepth 1 -type f -name "*.py" -print0 | xargs -0 basename -a | sed 's/\.[^.]*$//')"
    dir_opts="$(find ${cur_path} -maxdepth 1 ! -path ${cur_path} -type d -print0 | xargs -0 basename -a | xargs -I {} echo {}.)"
    opts="${file_opts} ${dir_opts}"
    # Store the whole list by invoking "compgen" and filling
    # COMPREPLY with its output content.
    COMPREPLY=($(compgen -W "$opts" -- "$cur"))
    [[ $COMPREPLY == *\. ]] && compopt -o nospace
}
complete -F _python_target python

Here's a draft example:
_python_target()
{
    local cmd=$1 cur=$2 pre=$3
    if [[ $pre != -m ]]; then
        return
    fi
    local cur_slash=${cur//./\/}
    local i arr arr2
    arr=( $( compgen -f "$cur_slash" ) )
    arr2=()
    for i in "${arr[@]}"; do
        if [[ -d $i ]]; then
            arr2+=( "$i/" )
        elif [[ $i == *.py ]]; then
            arr2+=( "${i%.py}" )
        fi
    done
    arr2=( "${arr2[@]//\//.}" )
    COMPREPLY=( $( compgen -W "${arr2[*]}" -- "$cur" ) )
}
complete -o nospace -F _python_target python
Try it with the python-2.7.18 source code directory.
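For example, from the Lib/ directory of that source tree, a session might look roughly like this (the json package is just a convenient test case; output approximate):
$ cd Python-2.7.18/Lib
$ python -m js<TAB>
$ python -m json.<TAB><TAB>
json.__init__  json.decoder   json.encoder   json.scanner   json.tests.    json.tool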

Related

check whether a directory contains all (and only) the listed files

I'm writing unit-tests to test file-IO functions.
There's no formalized test-framework in my target language, so my idea is to run a little test program that somehow manipulates files in a test-directory, and after that checks the results in a little shell script.
To evaluate the output, I want to check a given directory whether all expected files are there and no other files have been created during the test.
My first attempt goes like this:
set -e
test -e "${test_dir}/a.txt"
test -e "${test_dir}/b.txt"
test -d "${test_dir}/dir"
find "${test_dir}" -mindepth 1 \
-not -wholename "${test_dir}/a.txt" \
-not -wholename "${test_dir}/b.txt" \
-not -wholename "${test_dir}/dir" \
| grep . && exit 1 || true
This properly detects whether there are two files a.txt and b.txt, and a subdirectory dir/ in the ${test_dir}.
If there happens to be a file c.txt, the test should and will fail.
However, this doesn't scale well.
There are dozens of unit-tests and each has a different set of files/directories, so I find myself repeating lines very similar to the above again and again.
So I'd rather wrap the above into a function call like so:
if checkdirectory "${test_dir}" a.txt b.txt dir/ dir/subdir/ dir/.hidden.txt; then
echo "ok"
else
echo "ko"
fi
Unfortunately, I have no clue how to implement checkdirectory (especially the find invocation with multiple -not -wholename ... stanzas gives me a headache).
To add a bit of fun, the constraints are:
support both (and differentiate between) files and directories
must (EDITed from should) run on Linux, macOS & MSYS2/MinGW, therefore:
POSIX if possible (in reality it will be bash, but probably bash < 4, so no fancy features)
EDIT
some more constraints (these didn't make it into my original late-night question; so just consider them "extra constraints for bonus points")
the test-directory may contain subdirectories and files in subdirectories (up to an arbitrary depth), so any check needs to operate on more than just the top-level directory
ideally, the paths may contain weird characters like spaces, linebreaks, ... (this really is unit-testing; we do want to test such cases)
the testdir is more often than not some randomly generated directory using mktemp -d, so it would be nice if we could avoid hardcoding it in the tests
no assumptions about the underlying filesystem can be made.
Assuming we have a directory tree as an example:
$test_dir/a.txt
$test_dir/b.txt
$test_dir/dir/c.txt
$test_dir/dir/"d e".txt
$test_dir/dir/subdir/
then would you please try:
#!/bin/sh
checkdirectory() {
    local i
    local count
    local testdir=$1
    shift
    for i in "$@"; do
        case "$i" in
            */) [ -d "$testdir/$i" ] || return 1 ;; # check if the directory exists
            *)  [ -f "$testdir/$i" ] || return 1 ;; # check if the file exists
        esac
    done
    # convert each filename to just a newline, then count the lines
    count=`find "$testdir" -mindepth 1 -printf "\n" | wc -l`
    [ "$count" -eq "$#" ] || return 1
    return 0
}
if checkdirectory "$test_dir" a.txt b.txt dir/ dir/c.txt "dir/d e.txt" dir/subdir/; then
echo "ok"
else
echo "ko"
fi
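Since the constraints mention directories generated with mktemp -d, a throwaway harness for the function could look like this (a sketch; it assumes GNU find for -printf and recreates the sample tree from the question):
#!/bin/sh
test_dir=`mktemp -d`
mkdir -p "$test_dir/dir/subdir"
touch "$test_dir/a.txt" "$test_dir/b.txt" "$test_dir/dir/c.txt" "$test_dir/dir/d e.txt"
if checkdirectory "$test_dir" a.txt b.txt dir/ dir/c.txt "dir/d e.txt" dir/subdir/; then
    echo "ok"
else
    echo "ko"
fi
rm -rf "$test_dir"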
One quick and easy way would be to compare the output of find against a reference string:
Let's start with an expected directory and file structure:
d/FolderA/filexx.csv
d/FolderA/filexx.doc
d/FolderA/Sub1
d/FolderA/Sub2
testassert
#!/usr/bin/env bash
assertDirContent() {
    read -r -d '' s < <(find "$1" -printf '%y %p\n')
    [ "$2" = "$s" ]
}
testref='d d/FolderA/
f d/FolderA/filexx.csv
f d/FolderA/filexx.doc
d d/FolderA/Sub1
d d/FolderA/Sub2'
if assertDirContent 'd/FolderA/' "$testref"; then
    echo 'ok'
else
    echo 'Directory content assertion failed'
fi
Testing it:
$ ./testassert
ok
$ touch d/FolderA/unwantedfile
$ ./testassert
Directory content assertion failed
$ rm d/FolderA/unwantedfile
$ ./testassert
ok
$ rmdir d/FolderA/Sub1
$ ./testassert
Directory content assertion failed
$ mkdir d/FolderA/Sub1
$ ./testassert
ok
$ rmdir d/FolderA/Sub2
$ # replace with a file instead of a directory
$ touch d/FolderA/Sub2
$ ./testassert
Directory content assertion failed
Now if you add timestamps and other info like permissions, owner, group to the find -printf output, you can also check all these matches the asserted string output.
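For instance, a variant that also asserts permissions, owner, and group might look like this (a sketch, still assuming GNU find; the name assertDirContentAndMeta is made up here):
assertDirContentAndMeta() {
    # %y=type, %m=octal permissions, %u=owner, %g=group, %p=path
    read -r -d '' s < <(find "$1" -printf '%y %m %u %g %p\n')
    [ "$2" = "$s" ]
}
The reference string then has to carry the same extra fields, e.g. 'f 644 alice users d/FolderA/filexx.csv'.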
I don't know what you mean by differentiating between files and directories since your last if statement is somehow binary. Here's what worked for me:
#! /bin/bash
function checkdirectory()
{
    test_dir="${1}"
    shift
    content="$@"
    for file in ${content}
    do
        [[ ! -e "${test_dir}/${file}" ]] && return 1
    done
    # -I is appended to "ls" once per expected name, ignoring those files
    # in order to check whether any other files exist.
    matched=" -I ${content// / -I } -I ${test_dir}"
    [[ -e `ls $matched` ]] && return 1
    return 0
}
if checkdirectory /some/directory a.txt b.txt dir; then
    echo "ok"
else
    echo "ko"
fi
Here's a possible solution I dreamed up during the night.
It destroys the test data, so it might not be usable in many cases (though it might just work for paths generated on the fly during unit tests):
checkdirectory() {
    local i
    local testdir=$1
    shift
    # try to remove all the listed files
    for i in "$@"; do
        if [ "x${i}" = "x${i%/}" ]; then
            rm "${testdir}/${i}" || return 1
        fi
    done
    # the directories should now be empty,
    # so try to remove those dirs that are listed
    for i in "$@"; do
        if [ "x${i}" != "x${i%/}" ]; then
            rmdir "${testdir}/${i}" || return 1
        fi
    done
    # finally ensure that no files are left
    if find "${testdir}" -mindepth 1 | grep . >/dev/null ; then
        return 1
    fi
    return 0
}
When invoking the checkdirectory function, deeper directories must come first (that is checkdirectory foo/bar/ foo/ rather than checkdirectory foo/ foo/bar/).
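If you would rather not order the arguments by hand, a small wrapper can sort them deepest-first before the call. This is only a sketch: the wrapper name is made up, it relies on bash >= 4 for mapfile (so it bends the bash < 4 constraint), and it assumes the listed names contain no newlines:
checkdirectory_any_order() {
    local testdir=$1 sorted
    shift
    # prefix each name with its slash count, sort deepest first, strip the prefix
    mapfile -t sorted < <(printf '%s\n' "$@" \
        | awk -F/ '{ print NF, $0 }' | sort -rn | cut -d' ' -f2-)
    checkdirectory "$testdir" "${sorted[@]}"
}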

Bash script to download from google images

Some weeks ago I found on this site a very useful bash script that downloads images from Google image results (download images from google with command line).
Although the script is quite complicated for me, I made some simple modifications so that it keeps the original names instead of renaming the results.
However, since last week the script has stopped working... probably Google updated the code or something, and the script's regexes no longer parse the results. I don't know enough about Google's code, web programming, or regexes to see what's wrong, although I made some educated guesses, but it still didn't work.
My (non-working) tweaked script is this:
#! /bin/bash
# function to create all dirs til file can be made
function mkdirs {
    file="$1"
    dir="/"
    # convert to full path
    if [ "${file##/*}" ]; then
        file="${PWD}/${file}"
    fi
    # dir name of following dir
    next="${file#/}"
    # while not filename
    while [ "${next//[^\/]/}" ]; do
        # create dir if doesn't exist
        [ -d "${dir}" ] || mkdir "${dir}"
        dir="${dir}/${next%%/*}"
        next="${next#*/}"
    done
    # last directory to make
    [ -d "${dir}" ] || mkdir "${dir}"
}
# get optional 'o' flag, this will open the image after download
getopts 'o' option
[[ $option = 'o' ]] && shift
# parse arguments
count=${1}
shift
query="$@"
[ -z "$query" ] && exit 1 # insufficient arguments
# set user agent, customize this by visiting http://whatsmyuseragent.com/
useragent='Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:31.0) Gecko/20100101 Firefox/31.0'
# construct google link
link="www.google.cz/search?q=${query}\&tbm=isch"
# fetch link for download
imagelink=$(wget -e robots=off --user-agent "$useragent" -qO - "$link" | sed 's/</\n</g' | grep '<a href.*\(png\|jpg\|jpeg\)' | sed 's/.*imgurl=\([^&]*\)\&.*/\1/' | head -n $count | tail -n1)
imagelink="${imagelink%\%*}"
# get file extension (.png, .jpg, .jpeg)
ext=$(echo $imagelink | sed "s/.*\(\.[^\.]*\)$/\1/")
# set default save location and file name change this!!
dir="$PWD"
file="google image"
# get optional second argument, which defines the file name or dir
if [[ $# -eq 2 ]]; then
    if [ -d "$2" ]; then
        dir="$2"
    else
        file="${2}"
        mkdirs "${dir}"
        dir=""
    fi
fi
# construct image link: add 'echo "${google_image}"'
# after this line for debug output
google_image="${dir}/${file}"
# construct name, append number if file exists
if [[ -e "${google_image}${ext}" ]] ; then
    i=0
    while [[ -e "${google_image}(${i})${ext}" ]] ; do
        ((i++))
    done
    google_image="${google_image}(${i})${ext}"
else
    google_image="${google_image}${ext}"
fi
# get actual picture and store it under its original name
wget --max-redirect 0 -q "${imagelink}"
# if 'o' flag supplied: open image
[[ $option = "o" ]] && gnome-open "${google_image}"
# successful execution, exit code 0
exit 0
One way to investigate: pass the -x option to bash to get a trace of your script; that is, change /bin/bash to /bin/bash -x in your script's shebang, or simply invoke your script with
bash -x <yourscript>
You can also annotate your script with echo commands to track some variables.
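For example, to trace only the block of this script that extracts imagelink, you could combine both ideas (a sketch based on the script above):
# trace only the suspect block instead of the whole script
set -x
imagelink=$(wget -e robots=off --user-agent "$useragent" -qO - "$link" \
    | sed 's/</\n</g' | grep '<a href.*\(png\|jpg\|jpeg\)' \
    | sed 's/.*imgurl=\([^&]*\)\&.*/\1/' | head -n $count | tail -n1)
set +x
echo "imagelink='${imagelink}'" # empty means the regexes no longer match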

Error in attempting to parallel task of a bash script

I am trying to parallelize the task of rpw_gen_features in the following bash script:
#!/bin/bash
maxjobs=8
jobcounter=0
MYDIR="/home/rasoul/workspace/world_db/journal/for-training"
DIR=$1
FILES=`find $MYDIR/${DIR}/${DIR}\_*.hpl -name *.hpl -type f -printf "%f\n" | sort -n -t _ -k 2`
for f in $FILES; do
    fileToProcess=$MYDIR/${DIR}/$f
    # construct .pfl file name
    filebasename="${f%.*}"
    fileToCheck=$MYDIR/${DIR}/$filebasename.pfl
    # check if the .pfl file is already generated
    if [ ! -f $fileToCheck ];
    then
        echo ../bin/rpw_gen_features -r $fileToProcess &
        jobcounter=jobcounter+1
    fi
    if [jobcounter -eq maxjobs]
        wait
        jobcounter=0
    fi
done
but it generates some error at runtime:
line 20: syntax error near unexpected token `fi'
I'm not an expert in bash programming, so please feel free to comment on the whole code.
I am curious why you don't just use GNU Parallel:
MYDIR="/home/rasoul/workspace/world_db/journal/for-training"
DIR=$1
find $MYDIR/${DIR}/${DIR}\_*.hpl -name *.hpl -type f |
parallel '[ ! -f {.}.pfl ] && echo ../bin/rpw_gen_features -r {}'
Or even:
MYDIR="/home/rasoul/workspace/world_db/journal/for-training"
parallel '[ ! -f {.}.pfl ] && echo ../bin/rpw_gen_features -r {}' ::: $MYDIR/$1/$1\_*.hpl
It seems to be way more readable, and it will automatically scale when you move from an 8-core to a 64-core machine.
Watch the intro video for a quick introduction:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
You are missing a then, spaces and ${} around the variables:
if [jobcounter -eq maxjobs]
    wait
    jobcounter=0
fi
Should be
if [ ${jobcounter} -eq ${maxjobs} ]; then
    wait
    jobcounter=0
fi
Further, you need to double check your script as I can see many missing ${} for example:
jobcounter=jobcounter+1
Even if you use the variables correctly this still will not work:
jobcounter=${jobcounter}+1
Will yield:
1
1+1
1+1+1
And not what you expect. You need to use:
jobcounter=`expr $jobcounter + 1`
With newer versions of bash you should be able to do:
(( jobcounter++ ))
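Putting those fixes together, the counting logic inside the original loop might read like this (a sketch; it keeps the echo dry-run from the question):
if [ ! -f "$fileToCheck" ]; then
    echo ../bin/rpw_gen_features -r "$fileToProcess" &
    (( jobcounter++ ))
fi
if [ "${jobcounter}" -eq "${maxjobs}" ]; then
    wait
    jobcounter=0
fi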

How to split the file path to extract the various subfolders into variables? (Ubuntu Bash)

I need help with Ubuntu Precise bash script.
I have several tiff files in various folders
masterFOlder/
    masterSub1/
        masterSub1-1/
            file1.tif
        masterSub1-2/
            masterSub1-2-1/
                file2.tif
    masterSub2/
        masterSub1-2/
            .....
I need to run an ImageMagick command on each file and save it to a new folder "converted" while retaining the subfolder tree, i.e. the new tree will be:
converted/
    masterSub1/
        masterSub1-1/
            file1.png
        masterSub1-2/
            masterSub1-2-1/
                file2.png
    masterSub2/
        masterSub1-2/
            .....
How do I split the filepath into folders, replace the first folder (masterFOlder with converted), and recreate the new file path?
Thanks to everyone reading this.
This script should work.
#!/bin/bash
shopt -s extglob && [[ $# -eq 2 && -n $1 && -n $2 ]] || exit
MASTERFOLDER=${1%%+(/)}/
CONVERTFOLDER=$2
OFFSET=${#MASTERFOLDER}
while read -r FILE; do
    CPATH=${FILE:OFFSET}
    CPATH=${CONVERTFOLDER}/${CPATH%.???}.png
    CDIR=${CPATH%/*}
    echo "Converting $FILE to $CPATH."
    [[ -d $CDIR ]] || mkdir -p "$CDIR" && echo convert "$FILE" "$CPATH" || echo "Conversion failed."
done < <(exec find "${MASTERFOLDER}" -mindepth 1 -type f -iname '*.tif')
Just replace echo convert "$FILE" "$CPATH" with the actual command you use and run bash script.sh masterfolder convertedfolder
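To see what the parameter expansions are doing, here is the transformation traced on one hypothetical path from the question (resulting values shown in the comments):
FILE=masterFOlder/masterSub1/masterSub1-1/file1.tif
MASTERFOLDER=masterFOlder/        # ${1%%+(/)}/ with $1=masterFOlder
OFFSET=${#MASTERFOLDER}           # 13
CPATH=${FILE:OFFSET}              # masterSub1/masterSub1-1/file1.tif
CPATH=converted/${CPATH%.???}.png # converted/masterSub1/masterSub1-1/file1.png
CDIR=${CPATH%/*}                  # converted/masterSub1/masterSub1-1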

custom directory completion appends whitespace

I have the following directory structure:
/home/tichy/xxx/yyy/aaa
/home/tichy/xxx/yyy/aab
/home/tichy/xxx/yyy/aac
I would like to enter cdw y<TAB> and get cdw yyy/<CURSOR> as a result, so I could add cdw yyy/a<TAB> and get cdw yyy/aa<CURSOR>
The solution I came up with gives me the following:
cdw y<TAB> => cdw yyy<SPACE><CURSOR>
Following code I have so far:
_cdw () {
    local cur prev dirs
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    prev="${COMP_WORDS[COMP_CWORD-1]}"
    COMPREPLY=($(compgen -d -- /home/tichy/xxx/${cur} | perl -pe 's{^/home/tichy/xxx/}{}'))
    # no difference, a bit more logical:
    dirs=$(compgen -o nospace -d /home/tichy/xxx/${cur} | perl -pe 's{^/home/tichy/xxx/}{}')
    COMPREPLY=($(compgen -d -W ${dirs} ${cur} | perl -pe 's{^/home/tichy/xxx/}{}'))
    return 0
}
complete -F _cdw cdw
cdw () {
    cd /home/tichy/xxx/$@
}
Any ideas what's wrong? It seems to me that the completion process thinks it's finished and isn't expecting any more input.
The simplest solution I've found so far is to generate completions that look like this:
COMPREPLY=( $( compgen -W "file1 file2 file3 dir1/ dir2/ dir3/" ) )
and add this line just before returning
[[ $COMPREPLY == */ ]] && compopt -o nospace
This sets the nospace option whenever the completion fills in up to a slash, so that the user ends up with:
cmd dir1/<cursorhere>
instead of:
cmd dir1/ <cursorhere>
and it leaves nospace unset whenever the completion fills in a complete filename, so that the user ends up with:
cmd file1 <cursorhere>
instead of:
cmd file1<cursorhere>
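Put together, a minimal self-contained demonstration might look like this (the command name demo and the word list are invented for illustration):
_demo() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "file1 file2 file3 dir1/ dir2/ dir3/" -- "$cur") )
    # suppress the trailing space only when the match ends in a slash
    [[ $COMPREPLY == */ ]] && compopt -o nospace
}
complete -F _demo demo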
If I understand correctly, you want to bash-autocomplete a directory name, and not have the extra space? (That's what I was looking for when I got to this page).
If so, when you register the completion function, use "-o nospace".
complete -o nospace -F _cdw cdw
I don't know if nospace works on compgen.
How about something like this:
COMPREPLY=( $(cdw; compgen -W "$(for d in ${cur}* ${cur}*/*; do [[ -d "$d" ]] && echo $d/; done)" -- ${cur}) )
(I'm not sure if you can call your shell function from here or not, otherwise you may have to duplicate it a bit.)
This also gets rid of your perl hack :-)
The bash-completion package provides a solution for this without any workaround: the function _filedir defined in /etc/bash_completion:
# This function performs file and directory completion. It's better than
# simply using 'compgen -f', because it honours spaces in filenames.
# @param $1  If `-d', complete only on directories. Otherwise filter/pick only
#            completions with `.$1' and the uppercase version of it as file
#            extension.
#
_filedir()
Then, specifying the following is enough:
_cdw () {
    local cur prev dirs
    cur="${COMP_WORDS[COMP_CWORD]}"
    prev="${COMP_WORDS[COMP_CWORD-1]}"
    _filedir # add -d at the end to complete only dirs, no files
    return 0
}
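As before, register the function for the cdw command:
complete -F _cdw cdw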
