I would like to know how you would find how many values are stored in a variable. For example:
dir=$(ls)
dir would store the names of all the files in the directory, and I know you can cycle through them with a while-do loop. But is there a way for me to know how many values (file names, in this case) are stored in the variable dir?
Thanks
With bash, this is a more reliable way to collect the contents of the directory into an array:
dir=(*)
That will handle elements with spaces properly.
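For example (a quick sketch; the two file names are made up, in an otherwise empty directory):

$ touch "a b.txt" c.txt
$ dir=$(ls); set -- $dir; echo $#    # word splitting breaks "a b.txt" in two: prints 3
$ dir=(*); echo ${#dir[@]}           # the array keeps it as one element: prints 2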
A clean way would be to use bash arrays.
dir=($(ls))
EDIT: The above does not handle spaces properly; see Glenn's answer for the correct way, which still creates a bash array:
dir=(*)
Notice the additional parentheses above, i.e. I declared dir to be an array rather than a single variable.
Then you can simply get the number of elements in dir (the array):
echo ${#dir[@]}
You can access the individual elements:
echo ${dir[0]}
echo ${dir[5]}
More details here: http://tldp.org/LDP/abs/html/arrays.html
ls | wc -l
or
echo "$dir" | wc -l
gives you an approximation. (It doesn't work well when filenames contain newlines.)
The reliable way to find out the number of files in a directory is
find . -maxdepth 1 -print0 | tr '\n\0' '_\n' | wc -l
but that includes the hidden files (and the . entry for the directory itself), which ls doesn't list by default.
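If you want to exclude the hidden entries to match ls's default view, a variant of the same newline-safe trick (a sketch) is:

find . -maxdepth 1 ! -name '.*' -print0 | tr '\n\0' '_\n' | wc -l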
A Bash variable is a string or an array of strings, nothing else. To check how many words are stored in a string variable:
$ echo "foo bar baz" | wc -w
3
To figure out how many files exist in the current directory, @glenn has the right idea. To figure out how many files exist matching an arbitrary find command, just use this:
$ find [your options] -printf x | wc -c
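For instance, to count the regular .txt files under the current tree (a hypothetical invocation; note that -printf is a GNU find extension):

$ find . -type f -name '*.txt' -printf x | wc -c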
Related
I have a number of files with a piece of useful information in their names that I want to extract as a variable and use in a subsequent step. The structure of the file names is samplename_usefulbit_junk. I'm attempting to loop through these files using a predictable portion of the file name (samplename), store the whole name in a variable, and use sed to extract the useful bit. It does not work.
samples="sample1 sample2 sample3"
for i in $samples; do
filename="$(find ./$FILE_DIR -maxdepth 1 -name '$(i)*' -printf '%f\n')"
usefulbit="$(find ./$FILE_DIR -maxdepth 1 -name '$(i)*' -printf '%f\n' | sed 's/.*samplename//g' | sed 's/junk.*//g')"
(More steps using $usefulbit or $(usefulbit) or ${usefulbit} or something)
done
find ./$FILE_DIR -maxdepth 1 -name 'sample1*' -printf '%f\n' and find ./$FILE_DIR -maxdepth 1 -name "sample1*" -printf '%f\n' both work, but no combination of parentheses, curly brackets, or single-, double-, or backquotes has got the loop to work. Where is this going wrong?
Try this:
for file in `ls *_*_*.*`
do
echo "Full file name is: $file"
predictable_portion_filename=${file%%_*}
echo "predictable portion in the filename is: ${predictable_portion_filename}"
echo "---"
done
PS: $variable or ${variable} or "${variable}" or "$variable" are different from $(variable): in the last case, $( ... ) creates a subshell and treats anything inside as a command, i.e. $(variable) makes the subshell execute a command named variable.
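A quick illustration of that difference (a sketch; variable is just an arbitrary name here):

$ variable=hello
$ echo ${variable}    # prints: hello
$ echo $(variable)    # bash: variable: command not found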
In place of ls *_*_*.*, you can also use the following to recursively find all files with that file-name pattern: ls -1R *_*_*.*
In place of using ${file%%_*} you can also use echo ${file} | cut -d'_' -f1 to get the predictable value. There are various other ways as well (awk, sed, etc.).
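For instance (a tiny sketch with a made-up file name):

$ echo "sample1_usefulbit_junk.ext" | cut -d'_' -f1
sample1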
Excuse me, I can't do it with bash; may I show you another approach? Here is a shell (lua-shell) I am developing, and a demo as a solution for your case:
wws$ `ls ./demo/2
sample1_red_xx.png sample2_green_xx.png sample3_blue_xx.png
wws$ source ./demo/2.lua
sample1_red_xx.png: red
sample2_green_xx.png: green
sample3_blue_xx.png: blue
wws$
I really want to know your whole plan, unless you need bash as the only tool...
Er, I forgot to paste the script:
samples={"sample1", "sample2", "sample3"}
files = lfs.collect("./demo/2")
function get_filename(prefix)
for i, file in pairs(files) do
if string.match(file.name, prefix) then return file.name end
end
end
for i = 1, #samples do
local filename = get_filename(samples[i])
vim:set(filename)
:f_lvf_hy
print(filename ..": ".. vim:clipboard())
end
The get_filename() function seems a little verbose... I haven't finished the lfs component.
I'm not sure whether answering my own question with my final solution is proper Stack Overflow etiquette, but this is what ultimately worked for me:
for i in directory/*.ext; do
myfile="$i"
name="$(echo $i | sed 's!.*/!!g' | sed 's/_junk*.ext//g')"
some other steps
done
This way I start with the file name already a variable (in a variable?) and don't have to struggle with find and its strong opinions. It also spares me from having to make a list of sample names.
The first sed removes the directory/ and the second removes the end of the file name and extension, leaving a variable $name that I use as a prefix when generating other files in subsequent steps. So much simpler!
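For what it's worth, the same extraction can be done with bash parameter expansion alone, with no sed at all (a sketch assuming the same samplename_usefulbit_junk.ext layout):

for i in directory/*.ext; do
    name=${i##*/}         # strip everything up to the last /
    name=${name%%_junk*}  # strip _junk and everything after it
    # some other steps using "$name"
done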
New to UNIX, currently learning UNIX via secureshell in a class. We've been given a few basic assignments such as creating loops and finding files. Our last assignment asked us to
write code that will estimate the number of shell scripts in the current directory and then print out that total number as "Estimated number of shell script files in this directory:"
Unlike in our previous assignments, we are not allowed to use conditional loops; we are encouraged to use grep and wc statements.
On a basic level I know I can enter
ls *.sh
to find all shell scripts in the current directory. Unfortunately, this doesn't estimate the total number or use grep. Hence my question, I imagine he wants us to go
grep -f .sh (or something)
but I'm not exactly sure if I am on the right path and would greatly appreciate any help.
Thank You
You can do it like:
echo "Estimated number of shell script files in this directory:" `ls *.sh | wc -l`
I'd do it this way:
find . -executable -execdir file {} + | egrep '\.sh: | Bourne| bash' | wc -l
Find all files in the current directory (.) which are executable.
For each file, run the file(1) command, which tries to guess what type of file it is (not perfect).
Grep for known patterns: filenames ending with .sh, or file types containing "Bourne" or "bash".
Count lines.
Huhu, there's a trap: .sh files are not always shell scripts, as the extension is not mandatory.
What tells you a file is a shell script is the shebang #!/bin/*sh (I put a * as it could be bash, csh, tcsh or zsh, which are all shells) at the top of the file, hence the hint to use grep, so the best answer would be:
grep '^#!/bin/.*sh' * | wc -l
This gives output like:
sensible-pager:#!/bin/sh
service:#!/bin/sh
shelltest:#!/bin/bash
smbtar:#!/bin/sh
grep works with regular expressions by default, so the pattern ^#!/bin/.*sh matches lines starting (the ^) with #!/bin/, followed by zero or more characters (.*), followed by sh.
You can test regexes and get explanations of them at http://regex101.com
Pipe the result to wc -l to get the number of files containing this.
To display the result, backticks or $() in an echo line is ok.
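Putting it together (a sketch; using grep -l so each file is counted once even if it has several matching lines):

echo "Estimated number of shell script files in this directory: $(grep -l '^#!/bin/.*sh' * | wc -l)"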
grep -l <string> *
will return a list of all files in the current directory that contain <string>. Pipe that output into wc -l and you have your answer.
Easiest way:
ls | grep .sh > tmp
wc tmp
That will print the number of lines, words and bytes of the tmp file. But in tmp there's a line for each .sh file in your working directory, so the number of lines gives an estimate of how many shell scripts you have.
wc tmp | awk '{print $1}' # Using awk to filter that output like...
wc -l tmp # Which it returns the number of lines follow by the name of file
But as many people say, the only certain way to know whether a file is a shell script is to look at its first line and see if there is a shebang like #!/bin/bash. If you want to develop it that way, keep in mind:
cat possible_script.x | head -n1 # That will give you the first line.
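A loop along those lines might look like this (a sketch; it checks only the first line of each regular file):

count=0
for f in *; do
    [ -f "$f" ] || continue    # skip directories and other non-files
    head -n1 "$f" | grep -q '^#!/bin/.*sh' && count=$((count+1))
done
echo "Estimated number of shell script files in this directory: $count"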
I am new to shell scripting.
I need to save the number of files with a particular extension (.properties) in a variable using a shell script.
I have used
ls |grep .properties$ |wc -l
but this command just prints the number of properties files in the folder. How can I assign this value to a variable?
I have tried
count=${ls |grep .properties$ |wc -l}
But it is showing error like:
./replicate.sh: line 57: ${ls |grep .properties$ |wc -l}: bad substitution
What type of error is this?
Can anyone help me save the number of matching files in a variable for future use?
You're using the wrong brackets: it should be $() (command output substitution) rather than ${} (variable substitution).
count=$(ls -1 | grep '\.properties$' | wc -l)
You'll also notice I've used ls -1 to force one file per line in case your ls doesn't do this automatically for pipelines, and changed the pattern to match the . correctly.
You can also bypass the grep totally if you use something like:
count=$(ls -1 *.properties 2>/dev/null | wc -l)
Just watch out for "evil" filenames like those with embedded newlines for example, though my ls seems to handle these fine by replacing the newline with a ? character - that's not necessarily a good idea for doing things with files but it works okay for counting them.
There are better tools to use if you have such beasts and you need the actual file name, but they're rare enough that you generally don't have to worry about it.
You could use a loop with globbing:
shopt -s nullglob  # otherwise the literal pattern itself is counted when there are no matches
count=0
for i in *.properties; do
count=$((count+1))
done
If you are using a shell that supports arrays, you can simply capture all such file names
files=( *.properties )
and then determine the number of array elements
count=${#files[@]}
(The above assumes bash; other shells may require slightly different syntax.)
You'd better use find instead of parsing ls. Then, use the var=$(command) syntax to store the value.
var=$(find . -maxdepth 1 -name "*\.properties" | wc -l)
Reference: Why you shouldn't parse the output of ls.
To solve the problem that appears if any file name contains newlines, you can use what chepner suggests in the comments:
var=$(find . -maxdepth 1 -name "*.properties" -exec echo 1 \; | wc -l)
so that for every match it prints not the name but an arbitrary character (in this case, 1), and the number of those lines is then counted to produce the correct output.
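With GNU find you can also let find itself print one line per match, which sidesteps the filename problem entirely (a sketch; -printf is a GNU extension):

var=$(find . -maxdepth 1 -name '*.properties' -printf '1\n' | wc -l)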
Use:
count=`ls|grep .properties$ | wc -l`
echo $count
You could write your assignment like this:
count=$(ls -q | grep -c '\.properties$')
or
count=$(ls -qA | grep -c '\.properties$')
if you want to include hidden files.
This works with all kinds of filenames because we're using ls with -q.
Sure, it's easier to link to some webpage that tells you to "never parse ls" than to read the ls manual and see that there's a -q option (and that most implementations default to -q if the output is a terminal device, which explains why some people here state that their ls seems to handle filenames with newlines just fine by replacing the newline with a ? character).
Parsing the output of ls to iterate through a list of files is bad. So how should I go about iterating through a list of files in the order in which they were first created? I browsed several questions here on SO and they all seem to parse ls.
The embedded link suggests:
Things get more difficult if you wanted some specific sorting that only ls can do, such as ordering by mtime. If you want the oldest or newest file in a directory, don't use ls -t | head -1 -- read Bash FAQ 99 instead. If you truly need a list of all the files in a directory in order by mtime so that you can process them in sequence, switch to perl, and have your perl program do its own directory opening and sorting. Then do the processing in the perl program, or -- worst case scenario -- have the perl program spit out the filenames with NUL delimiters.
Even better, put the modification time in the filename, in YYYYMMDD format, so that glob order is also mtime order. Then you don't need ls or perl or anything. (The vast majority of cases where people want the oldest or newest file in a directory can be solved just by doing this.)
Does that mean there is no native way of doing it in bash? I don't have the liberty to modify the filename to include the time in them. I need to schedule a script in cron that would run every 5 minutes, generate an array containing all the files in a particular directory ordered by their creation time and perform some actions on the filenames and move them to another location.
The following worked, but only because I don't have funny filenames. The files are created by a server, so they will never have special characters, spaces, newlines, etc.
files=( $(ls -1tr) )
I can write a perl script that would do what I need but I would appreciate if someone can suggest the right way to do it in bash. Portable option would be great but solution using latest GNU utilities will not be a problem either.
sorthelper=();
for file in *; do
# We need something that can easily be sorted.
# Here, we use "<date><filename>".
# Note that this works with any special characters in filenames
sorthelper+=("$(stat -n -f "%Sm%N" -t "%Y%m%d%H%M%S" -- "$file")"); # Mac OS X only
# or
sorthelper+=("$(stat --printf "%Y %n" -- "$file")"); # Linux only
done;
sorted=();
while read -d $'\0' elem; do
# this strips away the first 14 characters (<date>)
sorted+=("${elem:14}");
done < <(printf '%s\0' "${sorthelper[@]}" | sort -z)
for file in "${sorted[#]}"; do
# do your stuff...
echo "$file";
done;
Other than sort and stat, all commands are actual native Bash commands (builtins)*. If you really want, you can implement your own sort using Bash builtins only, but I see no way of getting rid of stat.
The important parts are read -d $'\0', printf '%s\0' and sort -z. All these commands are used with their null-delimiter options, which means that any filename can be processed safely. Also, the use of double quotes in "$file" and "${sorthelper[@]}" is essential.
*Many people feel that the GNU tools are somehow part of Bash, but technically they're not. So, stat and sort are just as non-native as perl.
With all of the cautions and warnings against using ls to parse a directory notwithstanding, we have all found ourselves in this situation. If you do find yourself needing sorted directory input, then about the cleanest use of ls to feed your loop is ls -opts | while read -r name; do ... This will handle spaces in filenames, etc., without requiring a reset of IFS, due to the nature of read itself. Example:
ls -1rt | while read -r fname; do # where '1' is ONE not little 'L'
So do look for cleaner solutions avoiding ls, but if push comes to shove, ls -opts can be used sparingly without the sky falling or dragons plucking your eyes out.
Let me add the disclaimer to keep everyone happy: if you have newlines inside your filenames, then do not use ls to populate a loop. If you do not have newlines inside your filenames, there are no other adverse side-effects.
Contra: TLDP Bash Howto Intro:
#!/bin/bash
for i in $( ls ); do
echo item: $i
done
It appears that SO users do not know what contra means -- please look it up before downvoting.
You can try using the stat command piped to sort:
stat -c '%Y %n' * | sort -t ' ' -nk1 | cut -d ' ' -f2-
Update: To deal with filenames containing newlines, we can use the %N format in stat, and instead of cut we can use awk, like this:
LANG=C stat -c '%Y^A%N' * | sort -t '^A' -nk1 | awk -F '^A' '{print substr($2,2,length($2)-2)}'
Use of LANG=C is needed to make sure stat uses single quotes only in quoting file names.
^A is the Control-A character, typed by pressing Ctrl-V then Ctrl-A.
How about a solution with GNU find + sed + sort?
As long as there are no newlines in the file name, this should work:
find . -type f -printf '%T# %p\n' | sort -k 1nr | sed 's/^[^ ]* //'
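If newlines in file names are a concern, a null-delimited variant of the same idea should work (a sketch, assuming GNU find and GNU sort):

find . -type f -printf '%T@ %p\0' | sort -z -k1nr | while IFS= read -r -d '' entry; do
    fname=${entry#* }    # drop the "seconds.fraction " prefix
    printf '%s\n' "$fname"
done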
It may be a little more work to ensure it is installed (it may already be, though), but using zsh instead of bash for this script makes a lot of sense. The filename globbing capabilities are much richer, while still using a sh-like language.
files=( *(oc) )
will create an array whose entries are all the file names in the current directory, but sorted by change time. (Use a capital O instead to reverse the sort order). This will include directories, but you can limit the match to regular files (similar to the -type f predicate to find):
files=( *(.oc) )
find is needed far less often in zsh scripts, because most of its uses are covered by the various glob flags and qualifiers available.
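For example, to iterate over regular files oldest-modified-first (a zsh sketch; om sorts by modification time and the capital O reverses the order):

files=( *(.Om) )    # regular files, oldest modification time first
for f in "${files[@]}"; do
    print -r -- "$f"
done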
I've just found a way to do it with bash and ls (GNU).
Suppose you want to iterate through the filenames sorted by modification time (-t):
while read -r fname; do
fname=${fname:1:((${#fname}-2))} # remove the leading and trailing "
fname=${fname//\\\"/\"} # removed the \ before any embedded "
fname=$(echo -e "$fname") # interpret the escaped characters
file "$fname" # replace (YOU) `file` with anything
done < <(ls -At --quoting-style=c)
Explanation
Given some filenames with special characters, this is the ls output:
$ ls -A
filename with spaces .hidden_filename filename?with_a_tab filename?with_a_newline filename_"with_double_quotes"
$ ls -At --quoting-style=c
".hidden_filename" " filename with spaces " "filename_\"with_double_quotes\"" "filename\nwith_a_newline" "filename\twith_a_tab"
So you have to process each filename a little to get the actual one. Recalling:
${fname:1:((${#fname}-2))} # remove the leading and trailing "
# ".hidden_filename" -> .hidden_filename
${fname//\\\"/\"} # removed the \ before any embedded "
# filename_\"with_double_quotes\" -> filename_"with_double_quotes"
$(echo -e "$fname") # interpret the escaped characters
# filename\twith_a_tab -> filename with_a_tab
Example
$ ./script.sh
.hidden_filename: empty
filename with spaces : empty
filename_"with_double_quotes": empty
filename
with_a_newline: empty
filename with_a_tab: empty
As you can see, file (or whatever command you use) receives each filename correctly.
Each file has three timestamps:
Access time: the file was opened and read. Also known as atime.
Modification time: the file was written to. Also known as mtime.
Inode modification time: the file's status was changed, such as the file had a new hard link created, or an existing one removed; or if the file's permissions were chmod-ed, or a few other things. Also known as ctime.
Neither one represents the time the file was created; that information is not saved anywhere. At file creation time, all three timestamps are initialized, and then each one gets updated appropriately, when the file is read, or written to, or when a file's permissions are chmoded, or a hard link created or destroyed.
So, you can't really list the files according to their file creation time, because the file creation time isn't saved anywhere. The closest match would be the inode modification time.
See the descriptions of the -t, -u, -c, and -r options in the ls(1) man page for more information on how to list files in atime, mtime, or ctime order.
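For example (a quick sketch of those options):

ls -lt     # sort by mtime, newest first
ls -ltu    # sort by atime instead
ls -ltc    # sort by ctime instead
ls -ltr    # add -r to reverse any of these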
Here's a way using stat with an associative array.
n=0
declare -A arr
for file in *; do
# modified=$(stat -f "%m" "$file") # For use with BSD/OS X
modified=$(stat -c "%Y" "$file") # For use with GNU/Linux
# Ensure stat timestamp is unique
if [[ $modified == *"${!arr[@]}"* ]]; then
modified=${modified}.$n
((n++))
fi
arr[$modified]="$file"
done
files=()
for index in $(IFS=$'\n'; echo "${!arr[*]}" | sort -n); do
files+=("${arr[$index]}")
done
Since sort sorts lines, $(IFS=$'\n'; echo "${!arr[*]}" | sort -n) ensures the indices of the associative array get sorted by setting the field separator in the subshell to a newline.
The quoting at arr[$modified]="$file" and files+=("${arr[$index]}") ensures that file names with caveats like a newline are preserved.
I'm running a shell script like this:
vQtde=`ssh user@server 'ls -lrt /mnta2/gvt/Interfaces/output/BI/sent/*.?${vDiaAnterior}* | grep "${vMDAtual}0[345678]:" | wc -l'`
And it fails with the error: ksh: /usr/bin/sh: arg list too long
I know that the same command on the local server returns 9. How can I escape the quotes in the remote grep?
The variables are:
vDiaAtual=`date +%d`
vMesAtual=`date +%b`
vMDAtual=" $vMesAtual $vDiaAtual ";
vDiaAnterior=120614
The problem here is not with grep. The problem is the following: the argument /mnta2/gvt/Interfaces/output/BI/sent/*.?${vDiaAnterior}* is expanded by the shell (by ksh in this case) and the resulting list is too big.
It would be better to simply run ls -lrt /mnta2/gvt/Interfaces/output/BI/sent/ and then add an additional grep after it.
Something like:
ls -lrt /mnta2/gvt/Interfaces/output/BI/sent/ | grep "\..${vDiaAnterior}" | grep ...
Based on information regarding that error message, I'm not sure if escaping the quotes is the real issue here.
What is it that you're ultimately trying to do? There's probably a slightly different way to approach it that avoids this problem. It appears that you're trying to count the number of files with a certain "last modified" date. Is this accurate? If so, I highly recommend against using the output of ls to do that. The output is inconsistent between platforms and can even change between versions. The find utility is much better suited for this sort of thing.
Try something like this instead:
dir=/mnta2/gvt/Interfaces/output/BI/sent/
pattern="*.?${vDiaAnterior}*"
time= # Fill this in based on the "last modified" time that you're looking for
find $dir -iname "$pattern" -mtime $time -exec printf '.' \; | wc -c
You can omit using the extra variables, they're only there to make the code more readable on the webpage.
This will search the given directory for all files with names that match the specified wildcard pattern and with "last modified" times that match whatever you specify. For each match found, the code printf '.' (which prints one dot to stdout) will be run. wc then counts the number of dot characters, which will be equal to the number of matching files found. The benefit of this method is that it minimizes the amount of data that needs to be piped between programs (including between the shell and ls). find handles the wildcard matching internally instead of requiring the shell to expand the wildcard and pass the result to ls. You're also only sending one character per matching file to wc instead of one long line of ls output per match. That should reduce the chances that you encounter the "arg list too long" error.
I resolved the problem this way:
- Create a .sh file on the local server that receives the parameters:
#!/usr/local/bin/bash
vDiaAnterior="${1}";
vMDAtual="${2}";
ls -l /mnta2/gvt/Interfaces/output/BI/sent/*.?${vDiaAnterior}AMA | grep "${vMDAtual}[345678]:" | wc -l;
Call it remotely:
ssh user@server ". /mnta1/prod_med1/scriptsf/ver_jobs_3_horas.sh $vDiaAnterior '$vMDAtual'"
Result: 9 Files.
Best Regards,
Cauca