I am trying to redirect a specific file to the "find" command in bash shell by using the below command.
ls sample.txt | find -name "*ple*"
I would like to search for the substring "ple" in the filename sample.txt that I passed, but the command above checks for a match in all the files in the directory instead of the specific file I passed through the pipe.
You need the grep command here, not find; find crawls the file system and ignores its standard input, so piping into it has no effect.
If you are looking for the sub string "ple" in the filename "sample.txt", then this code would do what you need:
mkdir -p /tmp/holder-file/ && cp /directory/sample.txt /tmp/holder-file/ && ls /tmp/holder-file/ | grep -e "ple"
Not super elegant but works great.
Edit: Thanks to Benjamin W. who came with a more elegant solution:
grep -e 'ple' <<< '/directory/sample.txt'
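One caveat with the here-string approach: grep sees the whole path, so a "ple" occurring in a directory name would also match. A small sketch (the path is illustrative) that tests only the filename component:

```shell
file='/directory/sample.txt'
# basename strips the directory part, so only the filename itself is tested
if grep -qe 'ple' <<< "$(basename "$file")"; then
    echo "filename contains ple"
fi
```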
I am trying to create a bash command that uses grep to search arguments in a specified directory. How would I do this? At the moment it only searches the current directory. I have tried the following, but it doesn't work:
ls $directoryName -l | grep "$1"
I'm sure there's a better way to do this, but:
ls -lah $directoryName > /usr/tmp/test
grep $1 /usr/tmp/test
rm /usr/tmp/test
Edit: You might have better luck using find though.
find "$directoryName" -name "$1"
Working example:
grep -e "Exec" /usr/share/applications/*
The following example will search recursively (including in sub-folders and hidden files) for an argument or pattern in a literal way (it will search for '$1' literally, not allowing substitution):
grep -re '$1' /folder/folder
Now, if you want to search for the value of the argument, then the code below would allow for substitution and do that:
grep -re "$1" /folder/folder
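A quick way to see the difference between the two quotings (the argument value "error" here is just an illustration; `set --` stands in for calling the script with that argument):

```shell
set -- error                                  # pretend the script received "error" as $1
printf 'literal $1 here\n' | grep -c '$1'     # single quotes: matches the text $1 itself
printf 'an error here\n'   | grep -c "$1"     # double quotes: matches the value, error
```

Each command prints 1, one matching line.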
I'm new to shell scripting. I need to compute checksums for a lot of files, so I thought I'd automate the process with a shell script.
I wrote two scripts. The first runs a recursive ls on the path I pass in, filtered through egrep -v; the command's output is captured into a variable as a string, and a for loop then splits that string into lines and passes each line as a parameter to the second script. The second script passes its parameter on to the hashdeep command, captures that output into another variable, splits it using IFS, and finally writes the field of interest to a text file.
The output is:
/home/douglas/Trampo/shell_scripts/2016-10-27-001757.jpg: No such file
or directory
----Checksum FILE: 2016-10-27-001757.jpg
----Checksum HASH:
The issue is: I passed the directory ~/Pictures as the parameter, but the error message reports a different directory, /home/douglas/Trampo/shell_scripts/ (the script's own directory). The file 2016-10-27-001757.jpg is actually in ~/Pictures. Why is the script looking in its own directory?
First script:
#/bin/bash
arquivos=$(ls -R $1 | egrep -v '^d')
for linha in $arquivos
do
bash ./task2.sh $linha
done
Second script:
#/bin/bash
checksum=$(hashdeep $1)
concatenado=''
for i in $checksum
do
concatenado+=$i
done
IFS=',' read -ra ADDR <<< "$concatenado"
echo
echo '----Checksum FILE:' $1
echo '----Checksum HASH:' ${ADDR[4]}
echo
echo ${ADDR[4]} >> ~/Trampo/shell_scripts/txt2.txt
I think that's it... sorry about the English grammar errors.
I hope the question is clear.
Thanks in advance!
There are several things wrong in the first script alone.
When ls runs in recursive mode with -R, the output is grouped per directory and each file is listed relative to its parent directory, not by full pathname.
ls -R also doesn't produce long-format output, so | egrep -v '^d' (which apparently is meant to keep only non-directories) has nothing to match against.
In your specific case, the missing file 2016-10-27-001757.jpg is in a subdirectory but you lost the location by using ls -R.
Do not parse the output of ls. Use find and you won't have the same issue.
The first script can be replaced by a single line.
Try this:
#!/bin/bash
find "$1" -type f -exec ./task2.sh "{}" \;
Or if you prefer using xargs, try this:
#!/bin/bash
find "$1" -type f -print0 | xargs -0 -n1 -I{} ./task2.sh "{}"
Note: enclosing {} in quotes ensures that task2.sh receives a complete filename even if it contains spaces.
In task2.sh the parameter $1 should also be quoted "$1".
If task2.sh is executable, you are all set. If not, add bash in the line so it reads as:
find "$1" -type f -exec bash ./task2.sh "{}" \;
Alternatively, if task2.sh is missing the execute permission, add it by running chmod:
chmod a+x task2.sh
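For reference, the whole pipeline can be sketched as a single script. This sketch substitutes sha256sum for hashdeep so it runs on any box with coreutils (swap hashdeep back in if you need its multi-hash CSV output); the default output-file name is illustrative:

```shell
#!/bin/bash
# One-script sketch of the checksum task. find hands over full pathnames
# (ls -R loses the directory part, which caused the "No such file" error).
dir=${1:-$(mktemp -d)}        # directory to scan (empty demo dir if none given)
outfile=${2:-checksums.txt}   # file collecting the hashes
find "$dir" -type f -print0 |
while IFS= read -r -d '' f; do
    hash=$(sha256sum "$f" | cut -d' ' -f1)
    printf -- '----Checksum FILE: %s\n' "$f"
    printf -- '----Checksum HASH: %s\n' "$hash"
    echo "$hash" >> "$outfile"
done
```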
Good luck.
I have a script that creates a file list of directories available in another path.
Now, I would like to do some tasks only if the Directory is of the format "YYYY_MM_DD_HH" in this file list.
My file list has following entries:
2014_04_21_01
asdf
2012_01_19_10
2010_01
Now I would like to move the directories with names as YYYY_MM_DD_HH to another path. I.e., only 2014_04_21_01 & 2012_01_19_10 MUST be MOVED.
Please advise.
Use bash regex pattern matching:
for dir in $list
do
    if [[ "$dir" =~ ^[0-9]{4}_[0-9]{2}_[0-9]{2}_[0-9]{2}$ ]]
    then
        mv "$dir" newdir/
    fi
done
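Run against the sample entries from the question (echo stands in for mv here so the demo runs without the directories existing), only the two well-formed names match:

```shell
# Demo with the entries from the file list above
for dir in 2014_04_21_01 asdf 2012_01_19_10 2010_01; do
    if [[ "$dir" =~ ^[0-9]{4}_[0-9]{2}_[0-9]{2}_[0-9]{2}$ ]]; then
        echo "would move: $dir"
    fi
done
# prints: would move: 2014_04_21_01
#         would move: 2012_01_19_10
```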
Assuming you have a GNU version of sed on your computer, you could use it to easily parse your directory names and execute a command.
Say we have following input file:
2014_04_21_01
asdf
2012_01_19_10
2010_01
2012_01_19_10_09
62012_01_19_10
You can search for your regex with sed and replace it with a mv command as follows:
sed 's/^[0-9]\{4\}\(_[0-9]\{2\}\)\{3\}$/mv "&" "other_dir"/' file_list
will output:
mv "2014_04_21_01" "other_dir" # We want to run this
asdf
mv "2012_01_19_10" "other_dir" # and this
2010_01
2012_01_19_10_09
62012_01_19_10
Now if you add the (GNU sed) e flag at the end of the sed substitution (and the -n option before the sed script, to ensure only successful substitutions are executed), the generated command will be piped into your shell:
sed -n 's/^[0-9]\{4\}\(_[0-9]\{2\}\)\{3\}$/mv "&" "other_dir"/e' file_list
# ^^ ^
I would recommend running it first without the e flag so as to check that the mv commands are properly formatted.
Why make a separate file for the file list? Just go into that directory and execute the following command. I have taken the destination directory as /home/newdir/.
ls | grep '[0-9][0-9][0-9][0-9]_[01][0-9]_[0123][0-9]_[012][0-9]' | xargs -I{} mv {} /home/newdir/
Be careful while working with dates. Since the name format is YYYY_MM_DD_HH, there are constraints on valid MM, DD and HH values; think of how a calendar is constructed. So 9999_99_99_99 is not a valid name even though it is all digits. Fully enforcing those constraints means building the whole calendar into the script. I'm still working on that.
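As a sketch, the digit classes can be tightened to reject impossible months, days and hours (this still admits a few nonexistent dates, e.g. February 31st):

```shell
# Months limited to 01-12, days to 01-31, hours to 00-23
printf '%s\n' 2014_04_21_01 9999_99_99_99 2012_01_19_10 |
grep -E '^[0-9]{4}_(0[1-9]|1[0-2])_(0[1-9]|[12][0-9]|3[01])_([01][0-9]|2[0-3])$'
# only 2014_04_21_01 and 2012_01_19_10 pass
```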
Example:
perl -nle 'system("mv $_ dir/year$1") if /^(\d{4})_\d\d_\d\d_\d\d$/' flist
would extract the year and rename dir 2014_04_21_01 to dir/year2014
This single find command with -regex option should take care of this:
cd /base/path/of/these/dirs
find . -type d -regextype posix-egrep -regex '.*/[0-9]{4}_[0-9]{2}_[0-9]{2}_[0-9]{2}$' \
-exec mv '{}' /dest/dir/ \;
I am trying to make the following loop work. I get the output I want if I run the command on its own with one file as input, but nothing is extracted inside the for loop. Any help?
#!/bin/bash
FILES=$(/home/dd/ff/*.txt)
for file in $FILES
do
grep -r -i -A4 'Compliance Calculation' "$file"
done
Try it this way:
#!/bin/bash
FILES=(/home/dd/ff/*.txt)
for file in "${FILES[@]}"; do
grep -r -i -A4 'Compliance Calculation' "$file"
done
See my video on bash variable expansion for an explanation.
The minimal fix is to get rid of FILES altogether.
for file in /home/dd/ff/*.txt
There are a number of problems with this script:
Avoid bash-specific features; use /bin/sh rather than /bin/bash where you can. If you must use bash, prefer #!/usr/bin/env bash, since hard-coding /bin/bash is not portable.
$() runs a command, it does not expand a glob. It will attempt to run whatever file comes earliest in the expansion as an executable.
you should be using the find | xargs pattern instead of a for loop. Try find /home/dd/ff/ -maxdepth 1 -name '*.txt' | xargs grep -h -i -A4 'Compliance Calculation'
If you must use a loop you can just expand the glob directly in the loop:
for file in /home/dd/ff/*.txt
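Fleshed out, with nullglob added here so the loop body is skipped cleanly when no .txt files exist (the path is the one from the question):

```shell
#!/bin/bash
# Expand the glob directly in the for header; no intermediate variable needed.
shopt -s nullglob   # an unmatched glob expands to nothing instead of itself
for file in /home/dd/ff/*.txt; do
    # -r dropped: recursion has no effect when grep is given a regular file
    grep -i -A4 'Compliance Calculation' "$file"
done
```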
Not 100% sure how to go about this.
Basically my app creates two file outputs.
file
file.ext
When searching through the directory it always returns both files. I wish to work only with
file
how can I do a grep for just the file? Or is grep even the correct command?
!grep *.ext
pseudo code ^
Why not just:
grep foobar file
or if you want to search your directory
find . -name 'file' | xargs -r grep boofar
You can try using grep 'file$', i.e. select lines ending with file and nothing after that. Alternatively you can use grep -v to invert results.
# (echo "file"; echo "file.ext") | grep 'file$'
file
# (echo "file"; echo "file.ext") | grep -v '\.ext'
file
I have no idea what you are trying to do with grep, but if you want to check whether a file exists, you do simply this:
if [ -r /some/path/file ]; then
echo "File is readable."
fi
Grep has a -v flag that inverses the meaning (looks for lines that do not contain the pattern):
ls /your/directory | grep -v '\.ext$'
This will exclude every filename ending with .ext.
Try the -v option, which inverts the match:
grep -v pattern
For details, run $ man grep
To generate a list of all files that do not match a pattern, use -not in find. So you want:
find . -not -name '*.ext'
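A quick check in a scratch directory (file names taken from the question):

```shell
tmp=$(mktemp -d)                         # scratch directory for the demo
touch "$tmp/file" "$tmp/file.ext"
find "$tmp" -type f -not -name '*.ext'   # prints only .../file
```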