Checking if PWD contains directory name - bash

I want to find out if $PWD contains a certain directory name; the test should match the name anywhere in the path.
For example, I have paths like public/bower_components/name/ and also paths that are just public.
I want the test because the contents of the name folder are moved into the public folder and the bower_components folder is then removed.
Thanks

You can use BASH regex for this:
[[ "$PWD" =~ somedir ]] && echo "PWD has somedir"
OR using shell glob:
[[ "$PWD" == *somedir* ]] && echo "PWD has somedir"

You can use case:
case "$PWD" in
*/somedir/*) …;;
*) ;; # default case
esac
You can use [[:
if [[ "$PWD" = */somedir/* ]]; then …
You can use regex:
if [[ "$PWD" =~ somedir ]]; then …
and there are more ways, to boot!
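Putting that together for the layout in the question, a minimal sketch might look like the following; the public/bower_components/name layout comes from the question, but the assumption that the script runs from inside the name folder is mine:
if [[ "$PWD" == */bower_components/* ]]; then
    public_dir="${PWD%/bower_components/*}"   # strip /bower_components/... to get the public folder
    cp -R ./* "$public_dir"/                  # copy this folder's contents into public/ (dotfiles not included)
    cd "$public_dir" || exit                  # step out of the tree before deleting it
    rm -rf "$public_dir/bower_components"
fi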

Related

BASH - Safe check before running "rm -fr $FOLDER"

I have a script which really needs an rm -fr on a specific folder
I'd like to make this as safe as possible. I started this script below but I was wondering if there's anything else I missed.
folder=""
if [[ ! -d "$folder" ]]; then
echo "Error: is not a folder"
elif [[ "$folder" == "/" ]]; then
echo "Error: folder points to root"
elif [[ "$folder" == "../"* ]]; then
echo "Error: folder start with ../"
elif [[ "$folder" == *"/.."* ]]; then
echo "Error: folder contains /.."
elif [[ "$folder" == *"/*"* ]]; then
echo "Error: folder ends with /*"
else
rm -fr "$folder"
fi
Update: added the check for "/"
If you want to be as safe as possible, you could perhaps...
Make sure any globbing is done first :
shopt -s nullglob
declare -a folders=(folder_or_glob)
Iterate over each element of the array, one at a time, and operate on the canonical path.
for f in "${folders[#]-}"
do
[[ $f ]] || continue
candidate="$(realpath -e -s "$f")" || continue
ok_to_delete "$candidate" || continue
rm -rf "$candidate"
done
Use a function ok_to_delete to test:
ok_to_delete()
{
[[ -d $1 ]] || return 1      # Is a directory
[[ $1 != / ]] || return 1    # Not root
[[ "${1%/*}" ]] || return 1  # At least two levels deep
# ... add any test you want ...
}
There is a bit of redundancy here (e.g. not root + 2 levels deep), but this is just to give you ideas.
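One more idea along those lines, as a rough sketch: whitelist a base directory and only accept canonical paths underneath it (the /srv/app path here is purely an illustrative assumption):
in_allowed_base()
{
    # hypothetical whitelist: only paths strictly below /srv/app pass
    [[ $1 == /srv/app/?* ]]
}
ok_to_delete could then call in_allowed_base "$1" as one of its tests.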
Instead of checking the folder's path name, I would rather check the contents of that folder: file size/owner/timestamp/keyword/extension, etc., or whatever you care about most. That is a safer approach; just my two cents.
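As a rough illustration of that content-based idea (the 100 MB size cap, the ownership test, and the function name are assumptions of mine, and -quit needs GNU or FreeBSD find):
looks_safe_to_remove()
{
    local dir=$1
    [[ -d $dir ]] || return 1
    # arbitrary 100 MB cap: refuse folders larger than a build directory should be
    local size_kb
    size_kb=$(du -sk "$dir" | awk '{print $1}') || return 1
    (( size_kb < 102400 )) || return 1
    # refuse if anything inside is not owned by the current user
    [[ -z $(find "$dir" ! -user "$USER" -print -quit) ]] || return 1
}
looks_safe_to_remove "$folder" && rm -rf "$folder"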

Why am I getting unbound variable in bash?

I have the following script which works for the most part till it hits a specific line:
#!/usr/bin/env bash
set -eux
# Go Home.
cd /vagrant/Freya/
CLEANED_ASSETS=false
## Clean up time!
## Remove all vendor and composer.lock folders - just because.
for f in *; do
if [[ -d $f ]]; then
if [[ $f != ".git" ]] && [[ $f != "bin" ]] && [[ $f != "docs" ]]; then
if [[ $f == "Loader" ]] && [[ $CLEANED_ASSETS == false ]]; then
cd "$f/"
if [[ -d "Assets" ]]; then
cd Assets/
rm -rf vendor composer.lock docs
let $CLEANED_ASSETS=true
cd ../../
fi
fi
cd "$f/"
rm -rf vendor composer.lock docs
cd ../
fi
fi
done
The issue is when it hits let $CLEANED_ASSETS=true. I am not sure of the proper way to set this variable to true so that it never enters this branch again. I keep getting:
+ let false=true
bin/clean-directories: line 21: true: unbound variable
CLEANED_ASSETS=true
No let, no $.
In particular, the let causes true to be treated as a variable name (searched for a numeric value), and referring to variable names that don't exist gets you flagged by set -u.
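Applied to the script in the question, only that one line changes; a minimal excerpt of the fixed block:
if [[ $f == "Loader" ]] && [[ $CLEANED_ASSETS == false ]]; then
    cd "$f/"
    if [[ -d "Assets" ]]; then
        cd Assets/
        rm -rf vendor composer.lock docs
        CLEANED_ASSETS=true   # plain assignment: no let, no $
        cd ../../
    fi
fi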

Filename prefix test always failing in bash

I have a question about using the shell to read files. That is to say, I have a folder like this:
folder
new_file.log (this is a file)
new_file2.log (this is a file)
new_file3.log (this is a file)
new (this is a subfolder)
new_file_from_subfolder.log
new_file2_from_subfolder.log
new_file3_from_subfolder.log
What I want is to read all the content from the (direct) files, not the files in the subfolder. In the above case, I need new_file.log to new_file3.log.
I know there is a simple way:
$ cat new*.log
but I would also like to write a bash script:
for file in $(ls -a)
do
if [[ "$file" != "." && "$file" != ".." ]]; then
if [[ -f "$file" && "$file" == "^new" ]]; then **here is the problem**
[do something]
fi
fi
done
My problem is marked above. Bash does not seem to like
"$file" == ^new
If I run the script, it basically does nothing, which means the files fail to meet the condition.
Anything wrong?
In [[ $foo = $bar ]] the right-hand side is a glob pattern, not a regex; ^ has no special meaning there.
You probably want either the glob expression
[[ $file = new* ]]
or the regex
[[ $file =~ ^new ]]
Of course, in a real-world scenario, you'd just iterate only over the names that match your prefix:
for file in new*; do
: something with "$file"
done
...or, recursively (using FD 3 so you can still interact with the user in this code):
while IFS= read -u 3 -r -d '' file; do
: something with "$file"
done 3< <(find . -type f -name 'new*' -print0)
You're headed down the wrong track. Here's how to iterate over all regular files starting with new:
for file in new*
do
if [[ -f $file ]]
then
dosomething "$file"
fi
done

Check If Shell Script $1 Is Absolute Or Relative Path [duplicate]

As the title says, I am trying to determine whether my bash script receives an absolute path or a relative path to a directory as a parameter.
For some reason the following doesn't seem to work for me:
#!/bin/bash
DIR=$1
if [ "$DIR" = /* ]
then
echo "absolute"
else
echo "relative"
fi
When I run my script with either an absolute or a relative path it says:
./script.sh: line 5: [: too many arguments
relative
For some reason I can't seem to figure out this bug. Any ideas?
[ ... ] doesn't do pattern matching. /* is being expanded to the contents of /, so effectively you have
if [ "$DIR" = /bin /boot /dev /etc /home /lib /media ... /usr /var ]
or something similar. Use [[ ... ]] instead.
if [[ "$DIR" = /* ]]; then
For POSIX compliance, or if you just don't have a [[ that does pattern matching, use a case statement.
case $DIR in
/*) echo "absolute path" ;;
*) echo "something else" ;;
esac
Just test on the first character:
if [ "${DIR:0:1}" = "/" ]
One more case is paths starting with ~ (tilde); ~user/some.file or ~/some.file are also, in effect, absolute paths.
if [[ "${dir:0:1}" == / || "${dir:0:2}" == ~[/a-z] ]]
then
echo "Absolute"
else
echo "Relative"
fi
ShellCheck automatically points out that "[ .. ] can't match globs. Use [[ .. ]] or grep."
In other words, use
if [[ "$DIR" = /* ]]
This is because [ is a regular command, so /* is expanded by the shell beforehand, turning it into
[ "$DIR" = /bin /dev /etc /home .. ]
[[ is handled specially by the shell, and doesn't have this problem.
Writing tests is fun:
#!/bin/bash
declare -a MY_ARRAY # declare an indexed array variable
MY_ARRAY[0]="/a/b"
MY_ARRAY[1]="a/b"
MY_ARRAY[2]="/a a/b"
MY_ARRAY[3]="a a/b"
MY_ARRAY[4]="/*"
# Note that
# 1) quotes around MY_PATH in the [[ ]] test are not needed
# 2) the expanded array expression "${MY_ARRAY[@]}" does need the quotes
# otherwise paths containing spaces will fall apart into separate elements.
# Nasty, nasty syntax.
echo "Test with == /* (correct, regular expression match according to the Pattern Matching section of the bash man page)"
for MY_PATH in "${MY_ARRAY[#]}"; do
# This works
if [[ $MY_PATH == /* ]]; then
echo "'$MY_PATH' is absolute"
else
echo "'$MY_PATH' is relative"
fi
done
echo "Test with == \"/*\" (wrong, becomes string comparison)"
for MY_PATH in "${MY_ARRAY[@]}"; do
# This does not work at all; comparison with the string "/*" occurs!
if [[ $MY_PATH == "/*" ]]; then
echo "'$MY_PATH' is absolute"
else
echo "'$MY_PATH' is relative"
fi
done
echo "Test with = /* (also correct, same as ==)"
for MY_PATH in "${MY_ARRAY[@]}"; do
if [[ $MY_PATH = /* ]]; then
echo "'$MY_PATH' is absolute"
else
echo "'$MY_PATH' is relative"
fi
done
echo "Test with =~ /.* (pattern matching according to the regex(7) page)"
# Again, do not quote the regex; '^/' would do too
for MY_PATH in "${MY_ARRAY[@]}"; do
if [[ $MY_PATH =~ ^/[[:print:]]* ]]; then
echo "'$MY_PATH' is absolute"
else
echo "'$MY_PATH' is relative"
fi
done

sh: Test for existence of files

How does one test for the existence of files in a directory using bash?
if ... ; then
echo 'Found some!'
fi
To be clear, I don't want to test for the existence of a specific file. I would like to test if a specific directory contains any files.
I went with:
(
shopt -s dotglob nullglob
existing_files=( ./* )
if [[ ${#existing_files[@]} -gt 0 ]] ; then
some_command "${existing_files[@]}"
fi
)
Using the array avoids race conditions from reading the file list twice.
From the man page:
-f file
True if file exists and is a regular file.
So:
if [ -f someFileName ]; then echo 'Found some!'; fi
Edit: I see you already got the answer, but for completeness, you can use the info in Checking from shell script if a directory contains files - and lose the dotglob option if you want hidden files ignored.
I typically just use a cheap ls -A to see if there's a response.
Pseudo-maybe-correct-syntax-example-ahoy:
if [[ $(ls -A "$my_directory_path_variable") ]]; then ...
edit, this will work:
myDir=( ./* )
if [ ${#myDir[@]} -gt 1 ]; then echo "there's something down here"; fi
You can use ls in an if statement thus:
if [[ "$(ls -a1 | egrep -v '^\.$|^\.\.$')" = "" ]] ; then echo empty ; fi
or, thanks to ikegami,
if [[ "$(ls -A)" = "" ]] ; then echo empty ; fi
or, even shorter:
if [[ -z "$(ls -A)" ]] ; then echo empty ; fi
These basically list all files in the current directory (including hidden ones) that are neither . nor ...
If that list is empty, then the directory is empty.
If you want to discount hidden files, you can simplify it to:
if [[ "$(ls)" = "" ]] ; then echo empty ; fi
A bash-only solution (no invoking external programs like ls or egrep) can be done as follows:
emp=Y; for i in *; do if [[ $i != "*" ]]; then emp=N; break; fi; done; echo $emp
It's not the prettiest code in the world, it simply sets emp to Y and then, for every real file, sets it to N and breaks from the for loop for efficiency. If there were zero files, it stays as Y.
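The same idea reads a little more easily wrapped in a function; the name dir_has_files is only for illustration:
dir_has_files()
{
    local f
    for f in "$1"/*; do
        # without nullglob an empty directory leaves the literal pattern behind,
        # so -e only succeeds when a real entry exists
        [[ -e $f ]] && return 0
    done
    return 1
}
if dir_has_files .; then echo "there's something down here"; else echo empty; fi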
Try this
if [ -f /tmp/foo.txt ]
then
echo the file exists
fi
ref: http://tldp.org/LDP/abs/html/fto.html
How about this for checking whether a directory is empty or not:
$ find "/tmp" -type f -exec echo Found file {} \;
#!/bin/bash
if [ -e "$1" ]; then
echo "File exists"
else
echo "File does not exist"
fi
I don't have a good pure sh/bash solution, but it's easy to do in Perl:
#!/usr/bin/perl
use strict;
use warnings;
die "Usage: $0 dir\n" if scalar #ARGV != 1 or not -d $ARGV[0];
opendir my $DIR, $ARGV[0] or die "$ARGV[0]: $!\n";
my #files = readdir $DIR;
closedir $DIR;
if (scalar #files == 2) { # . and ..
exit 0;
}
else {
exit 1;
}
Call it something like emptydir and put it somewhere in your $PATH, then:
if emptydir dir ; then
echo "dir is empty"
else
echo "dir is not empty"
fi
It dies with an error message if you give it no arguments, two or more arguments, or an argument that isn't a directory; it's easy enough to change if you prefer different behavior.
# tested on Linux BASH
directory=$1
# %h is the hard-link count: a directory has 2 links plus one per subdirectory,
# so on ext-style filesystems this detects subdirectories rather than plain files
if test "$(stat -c %h "$directory")" -gt 2;
then
echo "not empty"
else
echo "empty"
fi
For fun:
if ( shopt -s nullglob ; perl -e'exit !@ARGV' ./* ) ; then
echo 'Found some!'
fi
(Doesn't check for hidden files)
