I often have other file systems mounted on my Ubuntu machine, and as a result, when I do a find, I have to make sure I include the -mount option (which is the same as the -xdev option) to avoid it searching (often slowly) on those file systems too. Sometimes, however, I forget; then I wonder why the find is taking so long! What I'd like is a way of making find use -mount all the time.
There doesn't seem to be an environment variable I can use, and there doesn't seem to be something like a .findrc file where I can specify options. And I can't easily use an alias because -mount needs to be after the location(s) I want to search. I could make a bash function that takes the arguments for the search and then inserts -mount after all the locations but before the first switch before passing it to the find command; but before I go to the effort, is there already a way of ensuring find uses -mount every time it's run?
In case it's useful to anyone else, here is the bash function I came up with:
find () {
    # Make a copy of the arguments so they can be altered
    local args=("$@")
    # Will start to look at the first (zeroth) element
    local i=0
    # While element i exists and doesn't start with a "-", increment i
    while [[ $i -lt ${#args[@]} && ${args[$i]} != -* ]]; do
        let ++i
    done
    # Insert -mount at position i, which is where the first switch currently is
    # (or is the end of the argument list).
    args=("${args[@]:0:$i}" '-mount' "${args[@]:$i}")
    # Use env to locate the find command in the path, and pass the manipulated
    # arguments to it.
    env find "${args[@]}"
}
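Once the function is defined (e.g. in ~/.bashrc), the rewrite is transparent; the path and pattern below are only illustrative. Typing

find /var/log -name '*.gz'

actually runs

env find /var/log -mount -name '*.gz'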
I have a script that I call from an application; I can't run it from the command line. I derive the directory where the script is located, and in the next variable go up one level to where my files are stored. From there I have 3 variables with the full path and file names (with wildcard), which I will refer to as "masks".
I need to find and "do something with" (copy/write their names to a new file, whatever else) each of these masks. The "do something" part isn't my obstacle, as I've done this fine when working with a single mask, but I would like to do it cleanly in a single loop instead of duplicating the loop and just referencing each mask separately, if possible.
Assume in my $FILESFOLDER directory below that I have 2 existing files, aaa0.csv & bbb0.csv, but no file matching the ccc*.csv mask.
#!/bin/bash
SCRIPTFOLDER=${0%/*}
FILESFOLDER="$(dirname "$SCRIPTFOLDER")"
ARCHIVEFOLDER="$FILESFOLDER"/archive
LOGFILE="$SCRIPTFOLDER"/log.txt
FILES1="$FILESFOLDER"/"aaa*.csv"
FILES2="$FILESFOLDER"/"bbb*.csv"
FILES3="$FILESFOLDER"/"ccc*.csv"
ALLFILES="$FILES1
$FILES2
$FILES3"
#here as an example I would like to do a loop through $ALLFILES and copy anything that matches to $ARCHIVEFOLDER.
for f in $ALLFILES; do
cp -v "$f" "$ARCHIVEFOLDER" > "$LOGFILE"
done
echo "$ALLFILES" >> "$LOGFILE"
The thing that really spins my head is that when I run something like this (I haven't done it with the copy command in place), the log file at the end shows:
filesfolder/aaa0.csv filesfolder/bbb0.csv filesfolder/ccc*.csv
Where I would expect echoing $ALLFILES just to show me the masks
filesfolder/aaa*.csv filesfolder/bbb*.csv filesfolder/ccc*.csv
In my "do something" area, I need to be able to use whatever method to find the files by their full path/name with the wildcard if at all possible. Sometimes my network is down for maintenance and I don't want to risk failing a change directory. I rarely work in linux (primarily SQL background) so feel free to poke holes in everything I've done wrong. Thanks in advance!
Here's a light refactoring with significantly fewer distracting variables.
#!/bin/bash
script=${0%/*}
folder="$(dirname "$script")"
archive="$folder"/archive
log="$folder"/log.txt # you would certainly want this in the folder, not $script/log.txt
shopt -s nullglob
all=()
for prefix in aaa bbb ccc; do
    cp -v "$folder/$prefix"*.csv "$archive" >>"$log"  # append, don't overwrite
    all+=("$folder/$prefix"*.csv)
done
echo "${all[@]}" >> "$log"
The change in the loop to append the output of cp -v instead of overwriting is a bug fix; otherwise the log would only contain the output from the last loop iteration.
I would probably prefer to have the files echoed from inside the loop as well, one per line, instead of collecting them all on one humongous line. Then you can remove the array all and instead simply use
printf '%s\n' "$folder/$prefix"*.csv >>"$log"
shopt -s nullglob is a Bash extension (so won't work with sh) which says to discard any wildcard which doesn't match any files (the default behavior is to leave globs unexpanded if they don't match anything). If you want a different solution, perhaps see Test whether a glob has any matches in Bash
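For instance, a quick way to see the effect in an interactive shell (assuming the directory from the question, where only aaa0.csv and bbb0.csv exist):

$ echo ccc*.csv ok
ccc*.csv ok
$ shopt -s nullglob
$ echo ccc*.csv ok
ok

Without nullglob the non-matching pattern is passed through literally; with it, the pattern simply disappears from the command line.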
You should use lower case for your private variables so I changed that, too. Notice also how the script variable doesn't actually contain a folder name (or "directory" as we adults prefer to call it); fixing that uncovered a bug in your attempt.
If your wildcards are more complex, you might want to create an array for each pattern.
tmpspaces=(/tmp/*\ *)
homequest=($HOME/*\?*)
for file in "${tmpspaces[#]}" "${homequest[#]}"; do
: stuff with "$file", with proper quoting
done
The only robust way to handle file names which could contain shell metacharacters is to use an array variable; using string variables for file names is notoriously brittle.
Perhaps see also https://mywiki.wooledge.org/BashFAQ/020
I am trying to write a simple Bash completion script for a program that runs its arguments as a command. A good example of this kind of program is the prime-run script provided by the nvidia-prime package:
#!/bin/bash
__NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only __GLX_VENDOR_LIBRARY_NAME=nvidia "$#"
This script sets a few environment variables, which instructs the prime driver to use the Nvidia dGPU on a hybrid system. The first argument is treated as the command, and all trailing arguments are passed through. So for example you can run prime-run code . and VSCode will start in the current directory using the dGPU.
Therefore from a completion-script POV, what we want is to basically try to complete as if the prime-run token isn't there (hence "transparent proxy"-like behaviour). To give a rather contrived example:
> prime-run journalc<TAB>
(completes journalctl)
> prime-run journalctl --us<TAB>
(completes --user)
However I am finding this surprisingly difficult in Bash (not that I know how in other shells). So the question is simple: is it possible and if so how?
Ideas I've (hopelessly) had
The simple complete -A command prime-run: the first argument gets completed as a command as expected (let's call it foo), but the following arguments are also completed as commands rather than as arguments to foo
Use some combination of compgen and complete -p to invoke the completion function of foo, but AFAIK the completion function for an arbitrary foo is defined locally and thus can't be called directly
TL;DR
bash-completion provides a function named _command_offset (permalink), which is exactly what I need.
# A meta-command completion function for commands like sudo(8), which need to
# first complete on a command, then complete according to that command's own
# completion definition.
Keep reading if you are interested in how I got here.
So I was daydreaming the other day, when it hit me - doesn't sudo basically have the exact same behaviour I want? So the task became simple - reverse engineer the completion script for sudo. Source available here: permalink.
Turns out, most of the code has to do with completing the various options, so it's safe to simply throw most of it out:
L 8-11, 50-52: Related to sudo's edit mode. Safe to ditch.
L 19-24, 27-39, 43-49: These complete sudo's options. Safe to ditch.
So we're left with this:
_sudo()
{
    local cur prev words cword split
    _init_completion -s || return
    for ((i = 1; i <= cword; i++)); do
        if [[ ${words[i]} != -* ]]; then
            local PATH=$PATH:/sbin:/usr/sbin:/usr/local/sbin
            local root_command=${words[i]}
            _command_offset $i
            return
        fi
    done
    $split && return
} &&
    complete -F _sudo sudo sudoedit
The for and if block are there to deal with sudo's options that precede the "guest command". Safe to ditch (after replacing all $i with 1).
The variable $split is only referenced in _init_completion (permalink), and it seems to be used for handling different argument styles (--foo=bar vs. --foo bar). Same with the -s flag. Irrelevant.
Appending to $PATH and setting $root_command have to do with privilege escalation. Only relevant to sudo.
So after the dust has cleared, by process of elimination, I ended up with this simple chunk of code:
_my-script()
{
    local cur prev words cword
    _init_completion || return
    _command_offset 1
} && complete -F _my-script my-script
Declaring these four local variables and calling _init_completion is standard for all completion scripts, so really it's as simple as one command. Of course someone had to write the massively-complex _command_offset function, so lucky me I guess?
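In case it saves someone a search: the snippet can just be sourced from your .bashrc, or (with bash-completion 2.x, as far as I know) placed in a per-command file so it gets loaded lazily. The file names below are only an example:

# Option 1: source it directly
source ~/scripts/my-script-completion.bash
# Option 2: let bash-completion load it on demand; the file must be named after the command
cp my-script-completion.bash ~/.local/share/bash-completion/completions/my-script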
Anyways, thank you for reading the story of me messing around and hopefully this will be helpful to some other person in the future.
In order to avoid ad-hoc setting of my PATH by the usual technique of blindly appending, I started hacking some code to prepend items to my path (the asdf path, for example).
pathprepend() {
for ARG in "$#"
do
export PATH=${${PATH}/:$"ARG"://}
export PATH=${${PATH}/:$"ARG"//}
export PATH=${${PATH}/$"ARG"://}
export PATH=$ARG:${PATH}
done
}
It's invoked like this: pathprepend /usr/local/bin, and /usr/local/bin gets prepended to PATH. The script is also supposed to cleanly remove /usr/local/bin from its original position in PATH (which it does, but not cleanly; the substitutions are dodgy).
Can anyone recommend a cleaner way to do this? The shell (bash) regex support is a bit limited. I'd much rather split into an array and delete the redundant element, but I wonder how portable either that or my implementation is. My feeling is, not particularly.
If you want to split PATH into an array, that can be done like so:
IFS=: eval 'arr=($PATH)'
This creates an array, arr, whose elements are the colon-delimited elements of the PATH string.
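For example (output shown in comments; your PATH will obviously differ):

IFS=: eval 'arr=($PATH)'
printf '%s\n' "${arr[@]}"
# /usr/local/bin
# /usr/bin
# /bin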
However, in my opinion, that doesn't necessarily make it easier to do what you want to do. Here's how I would prepend to PATH:
for ARG in "$#"
do
while [[ $PATH =~ :$ARG: ]]
do
PATH=${PATH//:$ARG:/:}
done
PATH=${PATH#$ARG:}
PATH=${PATH%:$ARG}
export PATH=${ARG}:${PATH}
done
This uses bash substitution to remove ARG from the middle of PATH, remove ARG from the beginning of PATH, remove ARG from the end of PATH, and finally prepend ARG to PATH. This approach has the benefit of removing all instances of ARG from PATH in cases where it appears multiple times, ensuring the only instance will be at the beginning after the function has executed.
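Wrapped back into the function from the question (with ARG made local, which is a small addition of mine), usage stays the same; the directories below are only examples:

pathprepend() {
    local ARG
    for ARG in "$@"
    do
        while [[ $PATH =~ :$ARG: ]]
        do
            PATH=${PATH//:$ARG:/:}
        done
        PATH=${PATH#$ARG:}
        PATH=${PATH%:$ARG}
        export PATH=${ARG}:${PATH}
    done
}

pathprepend /usr/local/bin "$HOME/.asdf/shims"
# both end up at the front of PATH (the last argument first), with any old copies removed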
How can I search an arbitrary path and determine if it has two folder names? The folder names can appear in any position in either order. Not a shell expert so seeking help here.
if [ -p "$PATH" ]; then
echo "path is set"
else
echo "path is not set"
fi
I found this segment but I'm not sure it's useful. $PATH is a special variable correct?
First, let me make sure I understand the question right. You have some path (like "/home/sam/foo/bar/baz") and you want to test whether it contains two specific directory names (e.g. "foo" and "bar") in either order, right? So, looking for "foo" and "bar":
/home/sam/foo/bar/baz would match
/mnt/bar/subdir/foo would also match
/mnt/bar/foo2 would not match, because "foo2" is not "foo"
If that's correct, you can do this in bash as two tests:
dir1="foo"
dir2="bar"
if [[ "/$path/" = *"/$dir1/"* && "/$path/" = *"/$dir2/"* ]]; then
echo "$path" contains both $dir1 and $dir2"
else
echo "$path" does not contain both $dir1 and $dir2"
fi
Notes:
This is using the [[ ]] conditional expression, which is different from [ ] and not available in basic shells. If you use this type of expression, you need to start the shell script with a shebang that tells the OS to run it with bash, not a generic shell (i.e. the first line should be either #!/bin/bash or #!/usr/bin/env bash), and do not run it with the sh command (that will override the shebang).
The way the comparison works is that it sees whether the path matches both the patterns *"/$dir1/"* and *"/$dir2/"* -- that is, it matches those names, with a slash at each end, maybe with something else (*) before and after. But since the path might not start and/or end with a slash, we add them ("/$path/") to make sure they're there.
Do not use PATH as a variable in your script -- it's a very special variable that tells the shell where to find executable commands. If you ever use it for anything else, your script will suddenly start getting "command not found" errors. Actually, there are a bunch of all-caps special-meaning variables; to avoid conflicts with them, use lowercase or mixed-case variables for your things.
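Putting the notes together, a minimal self-contained sketch (the script name and sample path are made up):

#!/bin/bash
# check-dirs.sh - does the path in $1 contain both directory names?
path=$1
dir1="foo"
dir2="bar"
if [[ "/$path/" = *"/$dir1/"* && "/$path/" = *"/$dir2/"* ]]; then
    echo "$path contains both $dir1 and $dir2"
else
    echo "$path does not contain both $dir1 and $dir2"
fi

Running ./check-dirs.sh /mnt/bar/subdir/foo prints "/mnt/bar/subdir/foo contains both foo and bar".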
I'm writing a quick shell script to build and execute my programs in one fell swoop.
I've gotten that part down, but I'd like to include a little if/else to catch bad extensions - if it's not an .adb (it's an Ada script), it won't let the rest of the program execute.
My two-part question is:
How do I grab just the extension? Or is it easier to just say *.adb?
What would the if/else statement look like? I have limited experience in Bash so I understand that's a pretty bad question.
Thanks!
There are ways to extract the extension, but you don't really need to:
if [[ $filename == *.adb ]] ; then
    . . . # this code is run if $filename ends in .adb
else
    . . . # this code is run otherwise
fi
(The trouble with extracting the extension is that you'd have to define what you mean by "extension". What is the extension of a file named foo? How about a file named report.2012.01.29? So general-purpose extension-extracting code is tricky, and not worth it if your goal is just to confirm that file has a specific extension.)
There are multiple ways to do it. Which is best depends in part on what the subsequent operations will be.
Given a variable $file, you might want to test what the extension is. In that case, you probably do best with:
extn=${file##*.}
This deletes everything up to the last dot in the name, slashes and all, leaving you with adb if the file name was adafile.adb.
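For example:

file=src/adafile.adb
extn=${file##*.}   # removes everything up to and including the last dot
echo "$extn"       # adb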
If, on the other hand, you want to do different things depending on the extension, you might use:
case "$file" in
(*.adb) ...do things with .adb files;;
(*.pqr) ...do things with .pqr files;;
(*) ...cover the rest - maybe an error;;
esac
If you want the name without the extension, you can do things the more traditional way with:
base=$(basename "$file" .adb)
path=$(dirname "$file")
The basename command gives you the last component of the file name with the extension .adb stripped off. The dirname command gives you the path leading to the last component of the file name, defaulting to . (the current directory) if there is no specified path.
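For example:

file=/home/me/src/adafile.adb
base=$(basename "$file" .adb)   # adafile
path=$(dirname "$file")         # /home/me/src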
The more recent way to do those last two operations is:
base=${file##*/}
path=${file%/*}
The advantage of these is that they are built-in operations that do not invoke a separate executable, so they are quicker. The disadvantage of the built-ins is that if you have a name that ends with a slash, the built-in treats it as significant but the command does not (and the command is probably giving you the more desirable behaviour, unless you want to argue GIGO).
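To illustrate the trailing-slash difference:

file=/home/me/src/
basename "$file"     # src       (the command ignores the trailing slash)
echo "${file##*/}"   # (empty)   (the built-in treats the trailing slash as significant)
dirname "$file"      # /home/me
echo "${file%/*}"    # /home/me/src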
There are other techniques available too. The expr command is an old, rather heavy-weight mechanism that would not normally be used (but it is very standard). There may be other techniques using the (( ... )), $(( ... )) and [[ ... ]] operators to evaluate various sorts of expression.
To get just the extension from the file path and name, use parameter expansion:
${filename##*.} # deletes everything to the last dot
To compare it with the string adb, just do
if [[ ${filename##*.} != adb ]] ; then
    echo Invalid extension at "$filename".
    exit 1
fi
or, using else:
if [[ ${filename##*.} != adb ]] ; then
    echo Invalid extension at "$filename".
else
    # Run the script...
fi
Extension:
fileext=$(echo "$filename" | sed 's_.*\.__')
Test
if [[ x"${fileext}" = "xadb" ]] ; then
#do something
fi