The pattern match !("file1") does not work within a bash script, but it does work on the command line.
For example:
ls !("file1"|"file2")
This will list all files in directory except file1 and file2.
When that line is executed in a script this error is displayed:
./script.sh: line 1: syntax error near unexpected token `('
./script.sh: line 1: ` ls !("file1"|"file2") '
The same error occurs regardless of the command used, e.g. rm -v !("file1"). What is going on here? Why does this not work in a script?
The extended glob syntax you are trying to use is turned off by default; you have to enable it separately in each script where you want to use it.
shopt -s extglob
Scripts should not use ls, though I imagine you were using it merely as a placeholder here.
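For illustration, here is a minimal sketch of a script with extended globbing enabled (printf stands in for ls, and the file names are taken from your example):
#!/bin/bash
shopt -s extglob                 # must be enabled before the line containing the pattern is parsed
# list everything in the current directory except file1 and file2
printf '%s\n' !("file1"|"file2")
# the same pattern works with other commands, e.g. removing everything except file1
rm -v !("file1")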
Globbing doesn't work that way unless you enable the extglob shell option. Instead, I recommend using find:
find . -maxdepth 1 -type f -not \( -name '<NAME>' -or -name '<NAME>' \) -delete
Before running this command with -delete, run it first without -delete to make sure it matches only the files you want removed.
A method that works with default settings and no external processes:
for f in *; do [[ $f =~ ^file[12]$ ]] || echo "$f"; done
Related
I have written the below command using a shell script:
/usr/bin/find ${FilePath[$i]} -name ${FileName[$i]}* -type f -mtime +${DaysNo[$i]} | grep ${FilePath[$i]}$tempfile > tempFilesList
It looks good when I execute this script directly, but gives me below error when I try to execute it from another shell script.
ERROR : /usr/bin/find: bad option resultmgr.log_2019-11-07
/usr/bin/find: [-H | -L] path-list predicate-list
It's likely that ${FileName[$i]}* is being expanded to multiple file names, which would give you something like -name file1 file2 in your command.
That could happen if, for example, files matching that mask exist in the current working directory when you run it from the other script, but not when you run it from the command line. Some shells will expand the pattern if possible but leave it alone if not, as per the following transcript:
~> echo testprog*
testprog testprog.c
~> echo nosuchfile*
nosuchfile*
~> _
That file2 would then be treated by find as one of its own arguments (hence the bad option error) and rejected as invalid.
You can check this by simply echoing out the command before running it:
echo Will run: /usr/bin/find ${FilePath[$i]} -name ${FileName[$i]}* -type f -mtime +${DaysNo[$i]} ...
and seeing what it outputs.
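If the pattern is meant to be matched by find itself rather than expanded by the shell, quoting it avoids the problem; a sketch based on your original line:
/usr/bin/find "${FilePath[$i]}" -name "${FileName[$i]}*" -type f -mtime +"${DaysNo[$i]}" | grep "${FilePath[$i]}$tempfile" > tempFilesList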
Within a bash script, I'm trying to pull all files with an extension '.jstd' into an array, loop over that array and carry out some action.
My script is failing to copy the path of each script into the array.
I have the following script.
#!/bin/bash
IFS=$'\n'
file_list=($(find '/var/www' -type f -name "*.jstd"))
for i in "${file_list[@]}"; do
echo "$i"
done
echo $file_list
unset IFS
The line file_list=($(find '/var/www' -type f -name "*.jstd")) works fine in the terminal, but fails in the script with:
Syntax error: "(" unexpected
I've googled, but failed. All ideas gratefully received.
edit: In case it helps in reproduction or clues, I'm running Ubuntu 12.04, with GNU bash, version 4.2.25(1)-release (i686-pc-linux-gnu)
This is precisely the error you would get if your shell were /bin/sh on Ubuntu, not bash:
$ dash -c 'foo=( bar )'
dash: 1: Syntax error: "(" unexpected
If you're running your script with sh yourscript -- don't. You must invoke bash scripts with bash.
That being given, though -- the better way to read a file list from find would be:
file_list=( )
while IFS= read -r -d '' filename; do
file_list+=( "$filename" )
done < <(find '/var/www' -type f -name "*.jstd" -print0)
...the above approach working correctly with filenames containing spaces, newlines, glob characters, and other corner cases.
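The resulting array can then be iterated over just as in your original loop:
for i in "${file_list[@]}"; do
    echo "$i"    # or whatever per-file action you need
done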
Hi, I am trying to work on files which have an extension like *.p8, *.p16, *.p32.
./pack_vectors $#
for var in "$#"
do
if [ -f $var ];
then
pack_list="${var/.dat/.p}"
echo $pack_list
# below line doesn't work
for f in $pack_list+([:digit:]);do
What I am getting out is:
./wrapper.sh: line 10: syntax error near unexpected token `('
./wrapper.sh: line 10: `for f in $pack_list+([:digit:]);do'
Why?
An easier way to do this is to use find:
find . -name "*.p8" -o -name "*.p16" -o -name "*.p32"
The -o is the equivalent of a boolean OR.
To assign it to a variable, do this:
myvar=$(find . -name "*.p8" -o -name "*.p16" -o -name "*.p32")
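If you then need to act on each matched file rather than keep them all in one string, looping over the find output directly is one option; a rough sketch:
find . -name "*.p8" -o -name "*.p16" -o -name "*.p32" | while read -r f; do
    echo "Processing $f"    # replace with the real per-file action
done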
Couple of possible issues. You have to have extglob on if you want to do the regex-style file matching you're trying to do in bash. So put
shopt -s extglob
before your for loop. You're also looking for [[:digit:]] if you want to use the POSIX character class in bash. So putting that together, try
shopt -s extglob
for f in ".p"+([[:digit:]]); do
Not quite sure what "$pack_list" is, so I replaced it with ".p" above.
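Putting that together with a variable (foo.p below is just a hypothetical prefix standing in for the result of ${var/.dat/.p}), the variable may stay quoted as long as the +([[:digit:]]) part is outside the quotes:
shopt -s extglob
pack_list="foo.p"                  # hypothetical prefix
for f in "$pack_list"+([[:digit:]]); do
    [ -e "$f" ] || continue        # skip the unexpanded pattern if nothing matched
    echo "$f"
done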
It is working now, with a small change to BroSlow's answer:
for f in "$pack_list+([[:digit:]])"; do
I'm working on a unix script that has 2 input parameters - path and size.
The script will check all the files in the given path with the given size and delete them. If the delete operation fails, the respective file name is recorded into a file. In any other case, the file is left untouched.
I have written some short code (I don't know whether it works).
find $path -type f -size +${byte_size}c -print | xargs -I {}
if $?=1;
then
rm -rf {};
else
echo {} >> Error_log_list.txt'
where
$path is the path where we search for the files.
size is the input size.
Error_log_list.txt is the file where we send the non-deletable filenames.
Can anyone please help me verify whether it is correct?
GNU find has a -delete option for this exact use case. More information (and a number of different approaches) in the find documentation.
find $path -type f -size +${byte_size}c -delete
Executing your script results in the following syntax error:
./test.sh: line 9: unexpected EOF while looking for matching `''
./test.sh: line 11: syntax error: unexpected end of file
Moreover, the condition of the if statement does not seem correct. If I am not wrong, it tests the return code before the rm command has even been executed.
I am not familiar with xargs, so I tried to rewrite your script using a while loop construct. Here is my script:
#!/bin/bash
path=$1
byte_size=$2
find "$path" -type f -size +${byte_size}c -print | while read -r file_name
do
rm -f "$file_name"
if [ ! $? -eq 0 ]; then
echo "$file_name" >> Error_log_list.txt
fi
done
I tested it trying to delete files without the right permission and it works.
I wrote a script; please check whether it does what you want:
a=`find . -type f -size +${size}c -print`
#check if $a is empty
if [ -z "$a" ]
then
echo $a > error_log.txt
#if a is not empty then remove them
else
rm $a
fi
Let me explain what we are doing here. First, we assign the names of the files in the current directory that satisfy the size requirement to the variable a. Then we check whether that variable is empty (empty means there is no file matching your size requirement); if a has some values, we delete them.
I have written an executable in C++ which is designed to take input from a file and output to stdout (which I would like to redirect to a single file). The issue is, I want to run this on all of the files in a folder, and the find command that I am using is not cooperating. The command that I am using is:
find -name files/* -exec ./stagger < {} \;
From looking at examples, it is my understanding that {} replaces the file name. However, I am getting the error:
-bash: {}: No such file or directory
I am assuming that once this is ironed out, in order to get all of the results into one file, I could simply use the pattern Command >> outputfile.txt.
Thank you for any help, and let me know if the question can be clarified.
The problem that you are having is that redirection is processed before the find command. You can work around this by spawning another bash process in the -exec call:
find files/* -exec bash -c '/path/to/stagger < "$1"' -- {} \;
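If, as you mention, you also want all of the results collected into a single file, the redirect can be applied to the find command as a whole, for example:
find files/* -exec bash -c '/path/to/stagger < "$1"' -- {} \; > outputfile.txt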
The < operator is interpreted as a redirect by the shell prior to running the command. The shell tries redirecting input from a file named {} to find's stdin, and an error occurs if the file doesn't exist.
The argument to -name is unquoted and contains a glob character. The shell applies pathname expansion and gives nonsensical arguments to find.
Filenames can't contain slashes, so the argument to -name can't work even if it were quoted. If GNU find is available, -path can be used with a glob pattern like files/*, but this doesn't mean "files in directories named files"; for that you need -regex. Portable solutions are harder.
You need to specify one or more paths for find to start from.
Assuming what you really wanted was to have a shell perform the redirect, here's a way with GNU find:
find . -type f -regex '.*/files/[^/]*$' -exec sh -c 'for x; do ./stagger <"$x"; done' -- {} +
This is probably the best portable way using find (-depth and -prune won't work for this):
find . -type d -name files -exec sh -c 'for x; do for y in "$x"/*; do [ -f "$y" ] && ./stagger <"$y"; done; done' -- {} +
If you're using Bash, this problem is a very good candidate for just using a globstar pattern instead of find.
#!/usr/bin/env bash
shopt -s extglob globstar nullglob
for x in **/files/*; do
[[ -f "$x" ]] && ./stagger <"$x"
done
One suggestion you may come across is to simply escape the less-than symbol:
find files/* -exec ./stagger \< {} \;
This does not work: find performs no redirection of its own, so the escaped < is passed to stagger as a literal argument rather than being treated as a redirect. Use one of the shell-wrapping approaches above instead.