How to make a vim variable determinable by bash? [duplicate]

This question already has answers here:
How to check if a file exists in a specific directory in a bash script?
(3 answers)
Closed 4 years ago.
I'm not sure how to word my question exactly...
I have the code
if grep "mynamefolder" /vol/Homefs/
then
echo "yup"
else
echo "nope"
fi
which gives me the output
grep: /vol/Homefs/: Is a directory
nope
The sh file containing the code and the directory I'm targeting are not in the same directory (if that makes sense).
I want to find the words myfoldername inside /vol/Homefs/ without going through any subdirectories. Doing grep -d skip, which I hoped would "skip" subdirectories and search only the top level, just gives me nope even though the folder/file/word I'm testing it on does exist.
Edit: I forgot to mention that I would also like mynamefolder to be a variable I can supply from the command line in PuTTY, something like
./file spaing, with spaing being the replacement for myfoldername.
I'm not sure if I explained that well enough, let me know!

You just want
if [ -e /vol/Homefs/"$1" ]; then
echo yup
else
echo nope
fi
The [ command, with the -e operator, tests if the named file entry exists.
vim is not involved, and grep is not needed.
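For instance, saved as file and made executable, the script reads the folder name from its first argument (spaing below is just the sample name from the question):
chmod +x file
./file spaing
This prints yup if /vol/Homefs/spaing exists, and nope otherwise.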

If you insist on using grep, you should know that grep searches file contents, not directories, so pointing it at a directory only produces the "Is a directory" error above. You can, however, convert the directory listing to a string and search that:
echo /vol/Homefs/* | grep mynamefolder
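To tie that to the command-line argument from the question, a minimal sketch (grep -q suppresses output; -w limits accidental substring matches and is widely supported, though not strictly POSIX):
if echo /vol/Homefs/* | grep -qw "$1"; then
echo yup
else
echo nope
fi
Be aware this matches the name anywhere in the listing, so similar names such as spaing.bak can still produce false positives; the [ -e ] test above is more precise.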

The `ls` command is interpreting my directory with spaces as multiple directories [duplicate]

This question already has answers here:
Why does shell ignore quoting characters in arguments passed to it through variables? [duplicate]
(3 answers)
Closed 3 years ago.
I'm trying to build a dynamic ls command as a string and use it to list the files of a directory whose name contains spaces. However, ls always interprets my one directory containing spaces as multiple directories, no matter what I do.
Consider the following simplified version of my shell script:
#!/bin/sh
export WORK_DIR="/Users/jthoms/Dropbox (My Company)/backup-jthoms/Work"
echo "WORK_DIR=$WORK_DIR"
export LS_CMD="ls -A \"$WORK_DIR/dependencies/apache-tomcat-8.0.45/logs\""
echo "LS_CMD=$LS_CMD"
if [ -n "$($LS_CMD)" ]
then
echo "### Removing all logs"
sudo rm "$WORK_DIR/dependencies/apache-tomcat-8.0.45/logs/*"
else
echo "### Not removing all logs"
fi
This script results in the following output:
WORK_DIR=/Users/jthoms/Dropbox (My Company)/backup-jthoms/Work
LS_CMD=ls -A "/Users/jthoms/Dropbox (My Company)/backup-jthoms/Work/dependencies/apache-tomcat-8.0.45/logs"
ls: "/Users/jthoms/Dropbox: No such file or directory
ls: (My: No such file or directory
ls: Company)/backup-jthoms/Work/dependencies/apache-tomcat-8.0.45/logs": No such file or directory
### Not removing all logs
How can I correctly escape my shell variables so that the ls command interprets my directory as a single directory containing spaces instead of multiple directories?
I recently changed this script which used to work fine for directories containing no spaces but now doesn't work for this new case. I'm working on Bash on MacOSX. I have tried various forms of escaping, various Google searches and searching for similar questions here on SO but to no avail. Please help.
Variables are for data. Functions are for code.
# There's no apparent need to export this shell variable.
WORK_DIR="/Users/jthoms/Dropbox (My Company)/backup-jthoms/Work"
echo "WORK_DIR=$WORK_DIR"
ls_cmd () {
ls -A "$1"/dependencies/apache-tomcat-8.0.45/logs
}
if [ -n "$(ls_cmd "$WORK_DIR")" ]; then
echo "### Removing all logs"
sudo rm "$WORK_DIR/dependencies/apache-tomcat-8.0.45/logs/"*
else
echo "### Not removing all logs"
fi
However, you don't need ls for this at all (and in general, you should avoid parsing the output of ls). For example,
find "$WORK_DIR/dependencies/apache-tomcat-8.0.45/logs/" -type f -exec rm -rf {} +
You could use
# ...
if [ -n "$(eval "$LS_CMD")" ]
# ...
See http://mywiki.wooledge.org/BashFAQ/050
Or even
# ...
if [ -n "$(bash -c "$LS_CMD")" ]
# ...
But you are probably better off using a dedicated function, or something more specific to your problem (using find instead of ls is usually a good idea in these cases, as in the sketch above).
Use arrays, not strings, to store commands:
ls_cmd=(ls -A "$WORK_DIR/dependencies/apache-tomcat-8.0.45/logs")
echo "ls_cmd=${ls_cmd[*]}"
if [ -n "$("${ls_cmd[@]}")" ]; then …
(The syntax highlighting on the last line is incorrect and misleading: we’re not unquoting ${ls_cmd[@]}; in reality, we are using nested quotes via a subshell here.)
That way, word splitting won’t interfere with your command.
Note that you can’t export arrays. But you don’t need to, in your code.
As others have noted, it’s often a better idea to use functions here. More importantly, you shouldn’t parse the output of ls. There are always better alternatives.

if folders exist - by using wildcard [duplicate]

This question already has answers here:
Test whether a glob has any matches in Bash
(22 answers)
Closed 4 years ago.
or "How to handle prefixed folder names?"
Inside a folder I have two (or more) foo_* folders
foo_0
foo_1
What I'm trying to achieve is to
perform an action if there's 1 or more foo_* folders
Use a wildcard *
Currently I'm doing it this way (going directly to check if directory foo_0 exists):
prefix=foo_
if [ -d "./${prefix}0/" ]; then
printf "foo_0 folder found!"
# delete all foo_* folders
fi
With directories numbered 0 to N, the above works, but I'm not sure I'll always have a foo_0 folder...
I'd like to use a wildcard instead:
prefix=foo_
if [ -d "./${prefix}*/" ]; then # By using wildcard...
printf "One or more foo_* folders found!" # this never prints
# delete all foo_* folders
fi
I've read that a wildcard * inside quotes loses its powers, but placing it outside quotes throws :
if [ -d "./${prefix}"* ] <<< ERROR: binary operator expected
Or is it possible to use some sort of regex, like ./foo_\d+ ?
The only solution I don't (arguably) like, is by using set
set -- foo_*
if [ -d $1 ]; then
printf "foo_* found!"
fi
but it wipes program arguments.
Is there any other nice solution to this I'm missing?
I think a nice solution for pretty much all such cases is to use ls in the test, since it often works quite simply:
if [ -n "$(ls -d foo_*)" ]; then ... If you want to do more regexp-like matching you can shopt -s extglob and then match with ls foo_+([0-9]).
There's also an all-bash solution using several shell options, but it's not as easy to remember, so I'll leave that to another poster ;-)
EDIT: As @PesaThe pointed out, ls foo_* would fail if the only match is a single empty directory, because ls would then list that directory's (empty) contents; ls foo_* would also match non-directories, so it's preferable to use -d.
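For completeness, here is a sketch of the all-bash approach alluded to above, relying on the nullglob shell option (bash-specific) so an unmatched glob expands to nothing:
shopt -s nullglob
dirs=(foo_*/)    # the trailing / restricts the glob to directories
shopt -u nullglob
if [ "${#dirs[@]}" -gt 0 ]; then
printf "One or more foo_* folders found!"
fi
Incidentally, the set -- trick from the question can be made harmless by wrapping it in a function: set -- inside a function only replaces that function's own positional parameters, leaving the script's arguments untouched.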

Getting Bash to parse variables from file input [duplicate]

This question already has answers here:
Forcing bash to expand variables in a string loaded from a file
(13 answers)
Closed 7 years ago.
Let's say I have a file called path.txt containing the text $HOME/filedump/ on a single line. How can I then read the contents of path.txt into a variable, while having Bash parse said content?
Here's an example of what I'm trying to do:
#!/bin/bash
targetfile="path.txt"
target=$( [[ -f $targetfile ]] && echo $( < $targetfile ) || echo "Not set" )
echo $target
Desired output: /home/joe/filedump/
Actual output: $HOME/filedump/
I've tried using cat in place of <, wrapping it in quotes, and more. Nothing seems to get me anywhere.
I'm sure I'm missing something obvious, and there's probably a simple builtin command. All I can find on Google is pages about reading variables from ini/config files or splitting one string into multiple variables.
If you want to evaluate the contents of path.txt and assign that to target, then use:
target=$(eval echo $(<path.txt))
for example:
$ target=$(eval echo $(<path.txt)); echo "$target"
/home/david/filedump/
This might not necessarily suit your needs (depending on the context of the code you provided), but the following worked for me:
targetfile="path.txt"
target=$(cat $targetfile)
echo $target
Here's a safer alternative to eval. In general, you should not be using configuration files that require bash to evaluate their contents; that just opens a security hole in your script. Instead, detect whether there is something that requires evaluation, and handle it explicitly. For example,
IFS= read -r path < path.txt
if [[ $path =~ '$HOME' ]]; then
target=$HOME/${path#\$HOME}
# more generally, target=${path/\$HOME/$HOME}, but
# when does $HOME ever appear in the *middle* of a path?
else
target=$path
fi
This requires you to know ahead of time what variables might appear in path.txt, but that's a good thing. You should not be evaluating unknown code.
Note that you can use any placeholder instead of a variable in this case; %h/filedump can be detected and processed just as easily as $HOME/filedump, without the presumption that the contents can or should be evaluated as shell code.
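A sketch of that placeholder idea, using a hypothetical %h marker for the home directory and nothing but built-in string handling:
IFS= read -r path < path.txt
case $path in
"%h"/*) target=$HOME/${path#%h/} ;;   # expand a leading %h to $HOME
*) target=$path ;;                    # anything else is taken literally
esac
echo "$target"
Since %h has no meaning to the shell, there is no risk of the file's contents ever being executed as code.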

Batch editing files 'stuck at weird place'

I'm trying to learn how to batch edit files and extract information from them. I've begun by creating some trial files and editing their names. I tried searching but couldn't find my problem anywhere.
If it's already answered, I'd be happy to be directed to that link.
So, I wrote the following code:
#!/bin/bash
mkdir -p ./trialscript
echo $1
i=1
while [ $i -le $1 ]
do
touch ./trialscript/testfile$i.dat
i=$(($i+1))
done
for f in ./trialscript/*.dat
do
echo $f
mv "$f" "$fhello.dat"
done
This doesn't seem to work, and I think it's because the echo output is like:
4
./trialscript/testfile1.dat
./trialscript/testfile2.dat
./trialscript/testfile3.dat
./trialscript/testfile4.dat
I just need the filename in f, not the complete path, and then to rename it.
Can someone suggest what is wrong in my code, and what's the correct way to do what I'm doing?
If you want to move the file, you have to use the path, too, otherwise mv wouldn't be able to find it.
The target specification for the mv command is more problematic, though. You're using
"$fhello.dat"
which, in fact, means "content of the $fhello variable plus the string .dat". How should the poor shell know where the seam is? Use
"${f}hello.dat"
to disambiguate.
Also, to extract parts of strings, see Parameter expansion in man bash. You can use ${f%/*} to only get the path, or ${f##*/} to only get the filename.
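Putting those pieces together, a sketch of a fixed rename loop (the placement of hello is illustrative; adjust to taste):
for f in ./trialscript/*.dat
do
dir=${f%/*}       # path portion: ./trialscript
name=${f##*/}     # filename portion: testfileN.dat
mv "$f" "$dir/${name%.dat}hello.dat"   # testfileN.dat -> testfileNhello.dat
done
This renames testfile1.dat to testfile1hello.dat and so on, keeping the files in the same directory.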

for loop on files that don't exist [duplicate]

This question already has answers here:
How to skip the for loop when there are no matching files?
(2 answers)
Closed 3 years ago.
I want to process a set of files (*.ui) in the current directory. The following script works as expected if some *.ui files are found. But if no .ui files exist in the current directory, the for loop is entered all the same. Why is that?
for f in *.ui
do
echo "Processing $f..."
done
It prints :
Processing *.ui...
Use:
shopt -s nullglob
From man bash:
nullglob
If set, bash allows patterns which match no files (see Pathname Expansion
above) to expand to a null string, rather than themselves.
You already have the how; the why is that bash will first try to match *.ui against existing files, but if that fails (it gets no results) it assumes you meant the literal string "*.ui".
for f in "*.ui"
do
echo "Processing $f..."
done
will indeed print "Processing *.ui...".
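Combining the two, a short sketch (nullglob is bash-specific, so this assumes a bash shebang rather than plain sh):
shopt -s nullglob
for f in *.ui
do
echo "Processing $f..."
done
shopt -u nullglob
With no matching files the loop body is simply never entered, and shopt -u restores the default behaviour afterwards.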
