I'm struggling with my script. I want to find the full file path of a specific file inside a zip archive, like this example:
/path/folder_06may2017.zip/file_B.txt
I hope you can help me solve this problem. It would be really useful.
Directory and file samples
- Folder_01may2017.zip
+ file_A.txt
+ file_B.txt
- Folder_06may2017.zip
+ file_A.txt
+ file_B.txt
I have used these commands with no success at all:
1st attempt:
find "/path/folder/" -name "*06may2017*" -print -exec unzip -l {} \; | grep -i 'file_B'
1st Output:
182118 2017-05-06 11:20 file_B.txt
2nd attempt:
find "/path/folder/" -name "*06may2017*" -print -exec unzip -l {} \; | grep -i 'file_B'| awk '{ print $4 }' ${PWD}
2nd output:
awk: warning: command line argument '/path/folder' is a directory: skipped
3rd attempt:
find "/path/folder" -name "*06may2017*" -exec grep -l "file_B" /dev/null '{}' \;
3rd output:
/path/folder/Folder_06may2017.zip
What about:
$ find "/path/folder" -name "*06may2017*" -exec unzip -l {} \; | awk '$1 ~ /Archive/{zipname = $2}; $4 ~ /file_B/ {printf "%s/%s\n", zipname, $4}'
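This works because unzip -l prints an "Archive:  name.zip" header before each file table, so awk can remember the archive name (field 2 of the header line) and glue it onto the matching filename (field 4 of a listing line). A minimal reproduction with faked unzip -l output, using the sample names from the question:

```shell
# Simulate two lines of `unzip -l` output and run the same awk program:
# the Archive: header sets zipname, the listing line triggers the printf.
printf 'Archive:  /path/folder/Folder_06may2017.zip\n  182118  2017-05-06 11:20   file_B.txt\n' |
awk '$1 ~ /Archive/ {zipname = $2}
     $4 ~ /file_B/  {printf "%s/%s\n", zipname, $4}'
```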
How can I make a valid constructed full file path for copy command?
I want to construct a valid file path that will be read by the cp command as the path where my lost file is located, so the file can be copied to another directory. The problem is that the cp command doesn't recognize it as a path. I hope you can help me solve this problem.
*The file and directory exists.
*The full file path was constructed by concatenating $fname and $path.
*My lost file is named 12345678.xml
script
17 for((i=0;i<${#array[@]}-1;i+=2));do
18 path=$(find "/path/sourcefolder" -name "*zipfolder*" -exec grep -l "Mylostfile" {} + )
19 fname=$(find "/path/sourcefolder" -name "*zipfolder*" -exec unzip -l {} \; | grep "12345678.xml" | awk ' { printf $4 }')
20 fullpath="$path/$fname"
21 cp -t "/path/Desktop" "$fullpath"
22 done
Actual output
Test_v2.sh: line 20: /path/zipfolder/12345678.xml: No such file or directory
cp: cannot stat '': No such file or directory
Desired output
Copy the file in another directory
Additional comments #1:
I added set -xv and this is the result:
++ find /path/sourcefolder -name '*zipfolder*' -exec grep -l 12345678 '{}' +
+ path=/path/sourcefolder/zipfolder.zip
++ find /path/sourcefolder -name '*zipfolder*' -exec unzip -l '{}' ';'
++ grep 12345678
++ awk ' { printf $4 }'
+ fname=12345678.xml
+ set -xv
+ fullpath=
+ /path/sourcefolder/zipfolder.zip/12345678.xml
Test_v2.sh: line 21: /path/sourcefolder/zipfolder.zip/12345678.xml: No such file or directory
+ cp -t /path/Desktop ''
+ cp: cannot stat '': Not a directory
Additional comments #2:
The space between fullpath= and "$path/$fname" was eliminated as suggested, and this is the result:
++ find /path/sourcefolder -name '*zipfolder*' -exec grep -l 12345678 '{}' +
+ path=/path/sourcefolder/zipfolder.zip
++ find /path/sourcefolder -name '*zipfolder*' -exec unzip -l '{}' ';'
++ grep 12345678
++ awk ' { printf $4 }'
+ fname=12345678.xml
+ set -xv
+ fullpath=/path/sourcefolder/zipfolder.zip/12345678.xml
+ cp -t /path/Desktop ''
+ cp: cannot stat '/path/sourcefolder/zipfolder.zip/12345678.xml': Not a directory
It looks like what you want to do could be simply implemented as follows :
find "/path/sourcefolder" -name "*zipfolder*" -exec unzip -p {} '*12345678.xml' > "/path/Desktop/12345678.xml" \;
It extracts a file 12345678.xml from anywhere in the found zip to /path/Desktop/12345678.xml by using unzip's -p flag to print the content of the file to stdout and redirecting stdout to the target file.
I'm trying to move my LOG folders. Here is what I have so far.
cd archive
find .. -type d -name 'LOGS' | xargs -I '{}' mv {} `echo {} | awk -F/ 'NF > 1 { print $(NF - 1)"-LOGS"; }'`
Unfortunately the command substitution --> echo {} | awk -F/ 'NF > 1 { print $(NF - 1)"-LOGS"; }' <-- is evaluated immediately, before xargs runs, so it doesn't give me the directory names I want. What I want it to end up running is:
mv ../app1/LOGS app1-LOGS
mv ../app2/LOGS app2-LOGS
Is there a way to do this in a single line?
Using xargs:
find .. -type d -name 'LOGS' |
xargs -I {} bash -c 'd="${1%/*}"; mv "$1" "${d##*/}-LOGS"' - {}
Or you can do it like this, using a while loop with process substitution:
cd archive
while IFS= read -rd '' dir; do
d="${dir%/*}"
d="${d##*/}"
mv "$dir" "$d-LOGS"
done < <(find .. -type d -name 'LOGS' -print0)
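Both variants rely on the same two parameter expansions to do the renaming: ${dir%/*} strips the trailing /LOGS component, and ${d##*/} keeps only the last remaining component (the parent directory's name). Worked through on one of the sample paths:

```shell
# Step through the expansions for one example path.
dir='../app1/LOGS'
d="${dir%/*}"     # remove the shortest trailing /* match -> ../app1
d="${d##*/}"      # remove the longest leading  */ match -> app1
echo "$d-LOGS"    # -> app1-LOGS
```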
I'm running the following command to get a directory listing:
find ./../ \
-type f -newer ./lastsearchstamp -path . -prune -name '*.txt' -o -name '*.log' \
| awk -F/ '{print $NF " - " $FILENAME}'
Is there some way I can format the output in a 2 column left indented layout so that the output looks legible?
The command above always adds a constant spacing between the filename and the path.
Expected output:
abc.txt /root/somefolder/someotherfolder/
helloworld.txt /root/folder/someotherfolder/
a.sh /root/folder/someotherfolder/scripts
A nice tool for this kind of thing is column -t. You just add the command to the end of the pipeline:
find ... | awk -F/ '{print $NF " - " $FILENAME}' | column -t
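If the goal is filename first, then directory, as in the expected output, note that awk's FILENAME variable doesn't help here (the input comes from a pipe, not a named file). A sketch with invented sample paths that splits each path itself and then aligns with column -t:

```shell
# For each path: keep the last component as the name, strip it from $0
# to leave the directory (with trailing slash), then align the columns.
printf '%s\n' \
  /root/somefolder/someotherfolder/abc.txt \
  /root/folder/someotherfolder/helloworld.txt |
awk -F/ '{f = $NF; sub(/[^/]*$/, ""); print f, $0}' |
column -t
```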
I am new to shell programming. I want to move any executable file, any file starting with a shebang (#!), and any file whose name ends in .sh from a directory to /tmp/backup, and log the names of the files moved.
This is what I have done so far.
Searching for files starting with #!
grep -ircl --exclude=*.{png,jpg,gif,html,jar} "^#" /home
Finding executables
find . -type f -perm +111 or find . -type f -perm -u+x
Now I am struggling with how to combine these two commands to get a final list that I can use to perform the backup and remove the files from the current directory.
Thanks
Use the xargs command to feed the output of find to grep:
find ... | xargs grep ...
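In concrete (assumed) form, combining the two commands from the question could look like this:

```shell
# Hypothetical combination: list user-executable files, then keep only
# those containing a line starting with #!. A plain pipe breaks on
# filenames with whitespace; with GNU tools use -print0 | xargs -0.
find . -type f -perm -u+x | xargs grep -l '^#!'
```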
You could put everything in a file, sort it, then process it with Awk:
# Select all files to move
grep -ircl --exclude=*.{png,jpg,gif,html,jar} '^#\!' /home > list.txt
find /home -type f \( -perm -u+x -o -name "*.sh" \) -print >> list.txt
# Feed them to Awk that will log and move the file
sort list.txt | uniq | awk -v LOGFILE="mylog.txt" '
{ print "Moving " $0 >> LOGFILE
"mv -v --backup \"" $0 "\" /tmp/backup" | getline
print >> LOGFILE }'
EDIT: you can make a formal script out of this skeleton, by adding some variables and some additional checks:
#!/bin/bash
LIST="$( mktemp || exit 1 )"
LOG="/tmp/mylog.txt"
SOURCE="/home"
TARGET="/tmp/backup"
mkdir -p "${TARGET}"
cd "${SOURCE}" || exit 1
# Select all files to move
grep -ircl --exclude=*.{png,jpg,gif,html,jar} '^#\!' "${SOURCE}" > "${LIST}"
find "${SOURCE}" -type f \( -perm -u+x -o -name "*.sh" \) -print >> "${LIST}"
# Feed them to Awk that will log and move the file
sort "${LIST}" | uniq | awk -v LOGFILE="${LOG}" -v TARGET="${TARGET}" '
{ print "Moving " $0 >> LOGFILE
"mv -v --backup \"" $0 "\" " TARGET | getline
print >> LOGFILE }'
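The same log-and-move step can also be done without driving mv from inside awk, with a plain while read loop over the sorted list. A self-contained sketch in a throwaway directory (all names here are invented for the demo):

```shell
# Demo sandbox: build a one-entry list, then log and move each file,
# mirroring the sort | uniq + mv pipeline above.
work=$(mktemp -d)
mkdir -p "$work/backup"
printf '#!/bin/sh\necho hi\n' > "$work/script.sh"
printf '%s\n' "$work/script.sh" > "$work/list.txt"

sort -u "$work/list.txt" | while IFS= read -r f; do
    printf 'Moving %s\n' "$f" >> "$work/mylog.txt"
    mv -v --backup "$f" "$work/backup/" >> "$work/mylog.txt"
done
```

Unlike building a shell command string inside awk, the loop passes each path to mv as a single quoted argument, so spaces in filenames survive intact.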
I want to get the total count of the number of lines from all the files returned by the following command:
shell> find . -name *.info
All the .info files are nested in sub-directories so I can't simply do:
shell> wc -l *.info
I'm sure this should be in any bash user's repertoire, but I'm stuck!
Thanks
wc -l `find . -name '*.info'`
If you just want the total, use
wc -l `find . -name '*.info'` | tail -1
Edit: piping to xargs also works and avoids the 'argument list too long' error (though if the list is very long, xargs runs wc more than once and you get one total line per batch):
find . -name '*.info' | xargs wc -l
You can use xargs like so:
find . -name '*.info' -print0 | xargs -0 cat | wc -l
Some googling turns up
find /topleveldirectory/ -type f -exec wc -l {} \; | awk '{total += $1} END{print total}'
which seems to do the trick
#!/bin/bash
# bash 4.0
shopt -s globstar
sum=0
for file in **/*.info
do
if [ -f "$file" ];then
s=$(wc -l < "$file")
sum=$((sum+s))
fi
done
echo "Total: $sum"
find . -name "*.info" -exec wc -l {} \;
Note to self - read the question
find . -name "*.info" -exec cat {} \; | wc -l
# for a speed-up use: find ... -exec ... '{}' + | ...
find . -type f -name "*.info" -exec sed -n '$=' '{}' + | awk '{total += $0} END{print total}'
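All of these variants reduce to "print one count per file (or per batch), then sum with awk". A self-contained check of the sed-based one with two throwaway files:

```shell
# Two sample files with 2 and 3 lines; the pipeline should total 5.
# sed -n '$=' prints the line number of the last line it reads, so
# whether find batches the files into one sed call or several, the
# awk sum comes out the same.
tmp=$(mktemp -d)
printf 'a\nb\n'    > "$tmp/one.info"
printf 'c\nd\ne\n' > "$tmp/two.info"
find "$tmp" -type f -name "*.info" -exec sed -n '$=' '{}' + |
awk '{total += $0} END{print total}'    # prints 5
```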