I'm working on a project where I need to print the resolution of images and videos; if the resolutions are not equal, the script should exit.
I've used the following command to print the image resolutions. In this example I have two images in a directory:
find $PWD -iname "*.jpg" -type f -exec identify -format '%i %wx%h\n' '{}' \;|awk '{print $NF}'
Output:
1280x720
640x362
I want to check whether the two resolutions match: if they do, the script should print Okay; otherwise it should print "Check the file resolution" and exit.
I tried the following command to turn the output into two variables, a1 and a2, but it is not working. When I copy and paste its output into the terminal, it works. Please help:
find $PWD -iname "*.jpg" -type f -exec identify -format '%i %wx%h\n' '{}' \;|awk '{print $NF}'|egrep -n "x"|sed 's#:#=#g'|sed 's#^#a#g'|paste -sd ";"|bash
You did
... | awk '{print $NF}'
and got
1280x720
640x362
I would harness GNU AWK as follows to check whether all sizes match:
... | awk '{arr[$NF]+=1}END{print length(arr)==1?"Everything match":"At least one mismatch"}'
This will print either Everything match or At least one mismatch. Explanation: I use the array arr; for every encountered resolution I increase the value for that resolution by 1. At END, if arr has exactly 1 distinct key I print Everything match, otherwise At least one mismatch, using the ternary operator, i.e. condition?valueiftrue:valueiffalse.
(tested in gawk 4.2.1)
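The same check can be written without gawk's length(array), for awks that lack it: remember the first resolution and flag any later line that differs. A sketch where the two sample lines stand in for the identify output from the question:

```shell
# Portable variant of the mismatch check; printf supplies sample data
# in place of the find/identify pipeline.
printf '1280x720\n640x362\n' |
  awk 'NR==1{first=$NF} $NF!=first{bad=1}
       END{print (bad ? "At least one mismatch" : "Everything match")}'
# prints: At least one mismatch
```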
I'm able to find all files in a directory and its subdirectories and save them into an array, but I also need to rename the ones that have spaces in their names, replacing the spaces with underscores.
Sample structure
./abc 123.txt
./dags/ftp.pyc
./dags/ftp.py
./logs/scheduler/2017-05-12/ftp.py.log
Find the files and insert them into an array:
array=($(find . -type f -print0 | xargs -0))
# Does not work
for i in ${array[@]};do echo ${i// /_};done
#Output
./abc
123.txt
./dags/ftp.pyc
./dags/ftp.py
./logs/scheduler/2017-05-12/ftp.py.log
It would be ideal if I could run a regex against the value before it goes into the array.
The problem with the above command is that the loop treats a filename containing a space as two words rather than one.
Something like the following should work:
find . -type f -print0 | while IFS= read -r -d '' file; do echo "${file// /_}"; done
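Since the question asked to rename rather than just echo, the same null-delimited loop can call mv; a sketch assuming bash, with a mktemp scratch directory standing in for the real tree:

```shell
#!/bin/bash
# Demo setup: a scratch directory with a file whose name contains a space.
dir=$(mktemp -d)
touch "$dir/abc 123.txt"

# Rename every file whose basename contains spaces, space -> underscore.
# Only the basename is rewritten, so parent directories are untouched.
find "$dir" -type f -name '* *' -print0 |
while IFS= read -r -d '' file; do
    base=${file##*/}
    mv -- "$file" "${file%/*}/${base// /_}"
done

ls "$dir"    # abc_123.txt
```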
I have an Xcode workspace with several hundred png files and would like to list those which are unreferenced.
Example pngs:
capture-bg-1.png
capture-bg-1@2x.png
promo_icon.png
promo_icon@2x.png
Reference example for "promo_icon" (XML file):
<string>promo_icon</string>
Reference example for "promo_icon" (Objective-C):
[UIImage imageNamed:@"promo_icon"]
I want to get a list of filenames including "capture-bg-1" (presuming it has no matches like "promo_icon" does).
A little wrinkle is that there is a .pbxproj file (XML) that has a reference to every png file in the workspace so that file needs to be excluded from the search.
The following command gets all the unique filename parts (excluding the folder and everything after '@' and '.') for evaluation:
find . -name '*.png' -exec basename {} \; | sed 's/[.@].*$//' | sort -u
The grep part into which I would pipe the filename parts is the problem. This grep finds and lists the references to 'promo_icon'; an empty result (no references) would mark a png as one of the files I'm looking for:
grep -I -R promo_icon . | grep -v pbxproj
However, I can't figure out how to combine the two in a functional way. There is this snippet (https://stackoverflow.com/a/16258198/26235) for doing this in sh, but it doesn't work.
An easier way to do this might be to put the list of all PNG names into one file, one per line. Then put the list of all references to PNG names into another file, one per line. Then grep -v -f the first file against the second. Whatever is returned is your answer.
First,
find . -name '*.png' -printf '%f\n' | sed -e 's/[.@].*$//' | sort -u > pngList
Then,
grep -RI --exclude '*.pbxproj' -e '<string>.*</string>' \
    -e 'UIImage imageNamed' . > pngRefs
Finally,
grep -v -f pngList pngRefs
And you can clean up the results with sed and sort -u from there.
::edit::
The above approach could produce some wrong answers if you have any PNGs whose names are proper substrings of other PNGs. For example, if you have promo_icon and cheese_promo_icon and promo_icon is never referenced but cheese_promo_icon is referenced, the above approach will not detect that promo_icon is unreferenced.
To deal with this problem, you can surround your PNG name patterns with \b (word-boundary) sequences:
find . -name '*.png' -printf '%f\n' | sed -e 's/[.@].*$//' -e 's/^/\\b/' -e 's/$/\\b/' | sort -u > pngList
This way your pngList file will contain lines like this:
\bpromo_icon\b
\bcapture-bg-1\b
so when you grep it against the list of references it will only match when the name of each PNG is the entire name in the image ref (and not a substring of a longer name).
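To get from the two files to the list the question actually asks for (stems with no reference at all), one option is to loop over pngList and keep the patterns that never match anything in pngRefs. A sketch with tiny hypothetical file contents standing in for the generated files:

```shell
#!/bin/sh
# Demo inputs standing in for the generated pngList and pngRefs files.
printf '%s\n' '\bpromo_icon\b' '\bcapture-bg-1\b' > pngList
printf '%s\n' '<string>promo_icon</string>' > pngRefs

# Print each stem (with the \b markers stripped) that has no match in pngRefs.
while IFS= read -r pattern; do
    grep -q "$pattern" pngRefs || printf '%s\n' "$pattern" | sed 's/\\b//g'
done < pngList
# prints: capture-bg-1
```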
This is the script that finds unreferenced images in an Xcode project. One gotcha is that people may use string formatting to construct references to images and that's unaccounted for here. Mac users will want to install findutils via brew to get a version of find with printf:
#!/bin/sh
# Finds unreferenced PNG assets in an xcode project
# Get a list of png file stems, stripping out folder information, 'png' extension
# and '#2x' parts of the filename
for png in $(find . -name '*.png' -printf '%f\n' | sed -e 's/[.@].*$//' | sort -u)
# Loop through the files and print out a list of files not referenced. Keep in mind
# that some files like 'asset-1' may be referred to in code like 'asset-%d' so be careful
do
    if ! grep -qRI --exclude project.pbxproj --exclude-dir Podfile "$png" . ; then
        echo "$png is not referenced"
    fi
done
I want to run a find command that will find a certain list of files and then iterate through that list of files to run some operations. I also want to find the total size of all the files in that list.
I'd like to make the list of files FIRST, then do the other operations. Is there an easy way I can report just the total size of all the files in the list?
In essence I am trying to find a one-liner for the 'total_size' variable in the code snippet below:
#!/bin/bash
loc_to_look='/foo/bar/location'
file_list=$(find $loc_to_look -type f -name "*.dat" -size +100M)
total_size=???
echo 'total size of all files is: '$total_size
for file in $file_list; do
# do a bunch of operations
done
You should simply be able to pass $file_list to du:
du -ch $file_list | tail -1 | cut -f 1
du options:
-c display a total
-h human readable (i.e. 17M)
du will print an entry for each file, followed by the total (with -c), so we use tail -1 to trim to only the last line and cut -f 1 to trim that line to only the first column.
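If any of the paths may contain spaces, word-splitting $file_list will break the du call; holding the list in a bash array keeps each path as a single argument. A sketch assuming bash 4.4+ for mapfile -d '', with the .dat files below as demo data only:

```shell
#!/bin/bash
# Demo data: two files, one with a space in its name.
dir=$(mktemp -d)
head -c 1M /dev/zero > "$dir/a file.dat"
head -c 2M /dev/zero > "$dir/b.dat"

# Read NUL-delimited paths from find into an array.
mapfile -d '' file_list < <(find "$dir" -type f -name '*.dat' -print0)

# Each array element stays one argument, spaces and all.
total_size=$(du -ch "${file_list[@]}" | tail -1 | cut -f 1)
echo "total size of all files is: $total_size"
```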
The methods explained here have a hidden bug: when the file list is long, it exceeds the shell's command-line length limit. It is better to use du like this:
find <some_directories> <filters> -print0 | du <options> --files0-from=- --total -s|tail -1
find produces a NUL-terminated file list; du reads it from stdin and sums the sizes.
This is independent of the shell's command-line length limit.
Of course, you can add switches to du to get the logical file size (e.g. --apparent-size), because by default du tells you how much physical space the files take.
But I think that is a question for Unix admins rather than programmers, so it may be off topic here. :)
This sums the block counts reported by the trusty ls for all regular files (it excludes directories, which would otherwise each contribute their own blocks):
cd /; find . -type f -exec ls -s {} \; | awk '{sum+=$1;} END {print sum/1000;}'
Note: execute as root if you start from /. ls -s reports 1 KiB blocks by default, so the result is approximate megabytes of allocated space.
The problem with du is that it adds the size of the directory nodes as well. This is an issue when you want to sum only the file sizes. (By the way, I find it strange that du has no option to ignore directories.)
In order to add the size of files under the current directory (recursively), I use the following command:
ls -laUR | grep -e "^\-" | tr -s " " | cut -d " " -f5 | awk '{sum+=$1} END {print sum}'
How it works: it lists all files recursively ("R"), including hidden files ("a"), showing their size ("l"), without sorting ("U"). (Skipping the sort can matter when directories contain many files.) We keep only the lines that start with "-" (regular files, ignoring directories and other entries), squeeze runs of spaces into single spaces so the column-aligned ls output becomes a single-space-separated list of fields, and cut the 5th field of each line, which holds the file size. The awk script sums these values into the sum variable and prints the result.
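Parsing ls output is fragile (a filename with a space shifts the field count). If GNU find is available, the same sum of regular-file sizes, ignoring directory nodes entirely, can be had directly:

```shell
# Sum the logical (apparent) size in bytes of all regular files,
# recursively; directories contribute nothing.
find . -type f -printf '%s\n' | awk '{sum += $1} END {print sum + 0}'
```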
ls -l | tr -s ' ' | cut -d ' ' -f <field number> is something I use a lot.
The 5th field is the size. Put that command in a for loop, add each size to an accumulator, and you'll get the total size of all the files in a directory. Easier than learning AWK. Plus, in the command-substitution part, you can grep to limit what you're looking for (^- for regular files, and so on):
total=0
for size in $(ls -l | tr -s ' ' | cut -d ' ' -f 5) ; do
total=$(( ${total} + ${size} ))
done
echo ${total}
The method provided by @Znik helps with the bug encountered when the file list is too long.
However, on Solaris (which is a Unix), du does not have the -c or --total option, so it seems there is a need for a counter to accumulate file sizes.
In addition, if your file names contain special characters, they will not pass well through the pipe (see: Properly escaping output from pipe in xargs).
Based on the initial question, the following works on Solaris (with a small amendment to the way the variable is created):
file_list=($(find $loc_to_look -type f -name "*.dat" -size +100M))
printf '%s\0' "${file_list[@]}" | xargs -0 du -k | awk '{total=total+$1} END {print total}'
The output is in KiB.
Sorry for the long title. I'm trying to basically write a script that will do a "find" and get a sorted list of all files named README and print out a section of text in them. It's an easy way for me to go to a directory which has a number of project folders and print out summaries. This is what I have so far:
find . -name "README" | xargs -I {} sed -n '/---/,/NOTES/p' {}
I can't seem to get this to be sorted by modified date. Any help would be great!
You can use the -printf option in find:
$ find . -name 'README' -printf '%T@\t%p\n' | sort -n | cut -f 2-
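Putting that together with the sed extraction from the question gives the full pipeline, oldest README first (use sort -rn for newest first):

```shell
# List README files oldest-to-newest by modification time, then print
# the section between '---' and 'NOTES' from each.
find . -name 'README' -printf '%T@\t%p\n' |
  sort -n | cut -f 2- |
  xargs -I {} sed -n '/---/,/NOTES/p' {}
```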