I am trying to list the names of all the files in a directory, separated by blank lines. I was using a for loop, but none of the examples I tried actually add a blank line between the names. Any ideas?
Also, is there any Unix command that outputs only the first line of a file? How could I display just the first line?
for i in ls
do
echo "\n" && ls -l
done
for i in ls
do
echo "\n"
ls
done
Use head or sed 1q to display only the first line of a file. But in this case, if I'm understanding you correctly, you want to capture and modify the output of ls.
ls -l | while read f; do
    printf '%s\n\n' "$f"
    # alternately:
    echo "$f"; echo
done
IFS="
"
for i in $(ls /dir/name/here/or/not)
do
echo -e "$i\n"
done
To see the first part of a file use head; for the end of a file use tail (of course). The command head -n 1 filename will display just the first line. Use man head to get more options. (I know how that sounds.)
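For example (file.txt here is just a placeholder name):
head -n 1 file.txt     # print only the first line
sed 1q file.txt        # same idea: print the first line, then quit
tail -n 1 file.txt     # print only the last line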
Use shell expansion instead of ls to list files.
for file in *
do
    echo "$file"
    echo
    if [ -f "$file" ]; then
        read firstline < "$file"    # read the first line of the file
        echo "$firstline"
    fi
done
Hi guys, I have this bash one-liner that I wish to make into a script:
for i in `ls *.fastq.gz`; do echo $(zcat ${i} | wc -l)/4 | bc; done
I would like to make it into a script that reads from a data dir and prints the result along with the name of each file.
I tried to put the dir in front of the glob ('data/*.fastq.gz') but got an error that no such dir exists...
I would like output like this:
name1.fastq.gz 1898516
name2.fastq.gz 2467421
namen.fastq.gz 1234532
I am not experienced in bash. Could you give me a hand?
Thanks
Take the dir as an argument, but default to the current dir if it's not set.
dir="${1-.}"
Then put it in the glob: "$dir"/*.fastq.gz
A few other points:
Quote variables and command expansions.
Don't parse ls.
Don't trust echo with arbitrary data (filenames); use printf instead (there's a short sketch of this after the list).
Use an end-of-options flag -- when giving filenames to commands.
I prefer not to have any inline command substitutions, but that's just personal preference.
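To illustrate the echo point (a small sketch I'm adding, not from the original answer; the file name -n is just an example of awkward data):
touch -- '-n'          # create a file literally named "-n"; -- stops option parsing
f='-n'
echo "$f"              # bash's builtin echo takes -n as an option and prints nothing
printf '%s\n' "$f"     # printf treats it as data and prints: -n
rm -- "$f"             # clean up, again protecting the odd name with --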
Putting it together:
#!/bin/bash
dir="${1-.}"
for file in "$dir"/*.fastq.gz; do
    printf '%s ' "$file"
    lines="$(zcat -- "$file" | wc -l)"
    bc <<< "$lines/4"    # using a here-string (a Bash feature)
done
There is no need to shell out to bc for integer math (dividing by 4), or to use ls to enumerate the files. The original version will do with minor changes:
#!/bin/bash
dir="${1-.}"
for i in "$dir"/*.fastq.gz; do
    lines=$(zcat "${i}" | wc -l)
    printf '%s %d\n' "$i" "$((lines/4))"
done
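Assuming either script is saved as, say, count_reads.sh (the name is just an example) and made executable, you run it with the data directory as the first argument; with no argument it falls back to the current directory:
chmod +x count_reads.sh
./count_reads.sh data    # counts the reads in data/*.fastq.gz
./count_reads.sh         # no argument: counts ./*.fastq.gz instead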
The script should read each file in the path and replace a string in every row. How do I create a temp file and mv it over the original while iterating over 10 different input files in the same path?
Please advise.
SunOS 5.10
FILES=/export/home/*.txt
for f in $FILES
do
echo "Processing $f file..."
cat $f | awk 'BEGIN {FS="|"; OFS="|"} {$8=substr($8, 1, 6)"XXXXXXXXXX\""; print}'
done
input file
"2013-04-30"|"X"|"0000628"|"15000231"|"1999-12-05"|"ST"|"2455525445552000"|"1111-11-11"|75.00|"XXE11111"|"224425"
"2013-04-30"|"Y"|"0000928"|"95000232"|"1999-12-05"|"VT"|"2455525445552000"|"1111-11-11"|95.00|"VVE11111"|"224425"
output file
"2013-04-30"|"X"|"0000628"|"15000231"|"1999-12-05"|"ST"|"245552xxxxxxxxxx"|"1111-11-11"|75.00|"XXE11111"|"224425"
"2013-04-30"|"Y"|"0000928"|"95000232"|"1999-12-05"|"VT"|"245552xxxxxxxxxx"|"1111-11-11"|95.00|"VVE11111"|"224425"
I'm not sure how to use this:
cat $f | awk 'BEGIN {FS="|"; OFS="|"} {$8=substr($8, 1, 6)"XXXXXXXXXX\""; print}' $f > tmp.txt && mv tmp.txt $f
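On the temp-file-and-mv part of the question: the usual pattern is to write the transformed output to a temporary file and rename it over the original only if the command succeeded. A minimal sketch (the .tmp suffix is arbitrary, and the awk program is copied unchanged from the question):
FILES=/export/home/*.txt
for f in $FILES
do
    echo "Processing $f file..."
    awk 'BEGIN {FS="|"; OFS="|"} {$8=substr($8, 1, 6)"XXXXXXXXXX\""; print}' "$f" > "$f.tmp" &&
        mv "$f.tmp" "$f"
done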
To achieve what appears to be your desired end result you can use ed instead of awk.
FILES=/export/home/*.txt
for f in $FILES; do
echo "Processing ${f} file..."
ed "${f}" <<EOF
% s/\([0-9]\{6\}\)[0-9]\{10\}/\1xxxxxxxxxx/
w
q
EOF
done
This requires fewer steps (i.e. command calls) because you're not creating a temp file and moving it. Instead you're editing the file in place with the desired changes and then closing it.
% means "operate on every line" (don't worry about lines that don't match)
s means "perform a substitution" -- /[pattern]/[replacement]/
w means write
q means quit
EOF closes out the "here document"
Hope that helps.
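As an illustration (not part of the original answer), you can see what the substitution does by running the same basic regular expression through sed on one of the sample input lines:
printf '%s\n' '"2013-04-30"|"X"|"0000628"|"15000231"|"1999-12-05"|"ST"|"2455525445552000"|"1111-11-11"|75.00|"XXE11111"|"224425"' |
    sed 's/\([0-9]\{6\}\)[0-9]\{10\}/\1xxxxxxxxxx/'
# the 16-digit field comes out as "245552xxxxxxxxxx", matching the desired output above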
Edit Note: Charles Duffy pointed out that ed ${f} would fail on file names with spaces in them and ed "${f}" would not suffer from that particular deficiency. This is true. It's also the case, however, that the for loop above would likely split on any spaces in the file names. You can set IFS (IFS='\n') to get around this limitation on KSH, BASH, MKSH, ASH, and DASH. In ZSH (depending on your version) you may need to set SH_WORD_SPLIT. As an alternative you can change from a for loop to a while loop with read:
FILES=/export/home/*.txt
ls ${FILES} | while read f; do
echo "Processing ${f} file..."
ed "${f}" <<-EOF
% s/\([0-9]\{6\}\)[0-9]\{10\}/\1xxxxxxxxxx/
w
q
EOF
done
Edit Note: My erroneous statements above stricken but kept for historical purposes. See comments from Charles Duffy (below) for clarification.
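For completeness, a glob-based loop (my sketch, not from the original answer) avoids both parsing ls and the word-splitting problem without touching IFS, while keeping the same ed commands:
for f in /export/home/*.txt; do
    echo "Processing ${f} file..."
    ed "${f}" <<EOF
% s/\([0-9]\{6\}\)[0-9]\{10\}/\1xxxxxxxxxx/
w
q
EOF
done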
I wrote this script:
#!/bin/bash
liste=`ls -l`
for i in $liste
do
echo $i
done
The problem is that I want the script to display each result line by line, but it displays word by word.
I get:
my_name
etud
4096
Oct
8
10:13
and I want to have:
my_name etud 4096 Oct 8 10:13
The final aim of the script is to analyze each line; that is why I want to be able to recover the entire line. Maybe a list is not the best approach, but I don't know how else to recover the lines.
To start, we'll assume that none of your filenames ever contain newlines:
ls -l | while IFS= read -r line; do
    echo "$line"
    # Do whatever else you want with $line
done
If your filenames could contain newlines, things get tricky. In this case, it's better (although slower) to use stat to retrieve the desired metadata from each file individually. Consult man stat for details about how your local variety of stat works, as it is unfortunately not very standardized.
for f in *; do
    line=$(stat -c "%U %n %s %y" "$f")    # one possibility (this -c format is GNU stat syntax)
    # Work with $line as if it came from ls -l
done
You can replace
echo $i
with
echo -n "$i "
echo -n writes to the console without appending a newline.
Another way to do it, with a while loop and without a pipe (so the loop doesn't run in a subshell and any variables you set inside it survive):
#!/bin/bash
while read line
do
    echo "line: $line"
done < <(ls -l)
First, I hope that you aren't genuinely using ls in your real code, but only using it as an example. If you want a list of files, ls is the wrong tool; see http://mywiki.wooledge.org/ParsingLs for details.
Second, modern versions of bash have a builtin called readarray.
Try this:
readarray -t my_array < <(ls -l)
for entry in "${my_array[@]}"; do
    read -a pieces <<<"$entry"
    printf '<%s> ' "${pieces[@]}"; echo
done
First, it creates an array (called my_array) with all the output from the command being run.
Then, for each line in that output, it creates an array called pieces and emits each piece with angle brackets around it.
If you want to read a line at a time, rather than reading the entire file at once, see http://mywiki.wooledge.org/BashFAQ/001 ("How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?")
Joining the previous answers with the need to store the list of files in a variable, you can do this:
echo -n "$list"|while read -r lin
do
echo $lin
done
Say, for example, I have a file called "tests"; it contains:
a
b
c
d
I'm trying to read this file line by line, and it should output:
a
b
c
d
I created a bash script called "read" and tried to read this file using a for loop:
#!/bin/bash
for i in ${1}; do    # for the ith line of the first argument, do...
    echo $i          # prints ith line
done
I execute it
./read tests
but it gives me
tests
Does anyone know what happened? Why does it print "tests" instead of the contents of "tests"? Thanks in advance.
#!/bin/bash
while IFS= read -r line; do
    echo "$line"
done < "$1"
This solution can handle files with special characters in the file name (like spaces or carriage returns), unlike some other responses.
You need something like this instead:
#!/bin/bash
while read line || [[ $line ]]; do
    echo "$line"
done < "${1}"
What you've written will, after expansion, become:
#!/bin/bash
for i in tests; do
echo $i
done
If you still want a for loop, do something like:
#!/bin/bash
for i in $(cat ${1}); do
echo $i
done
This works for me:
#!/bin/sh
for i in `cat $1`
do
echo $i
done
Say I have a bash script as follows:
while
read $f;
do
cat $f >> output.txt;
echo "aaa" >> output.txt;
done
Yet the second echo statement isn't executed. At all. What am I doing wrong?
I'm running this via
tail -f /var/log/somelog | ./script.sh
$f should not be empty. It's only supposed to output when tail notices a change in the file.
The variable $f is probably empty, and your script is hanging on a call to cat with no arguments. Did you want to say
while read f
instead of
while read $f
?
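For reference, a minimal corrected sketch of the script (only the read line changes; whether treating each incoming line as a file name for cat is what you want depends on what the log lines contain):
#!/bin/bash
while read f
do
    cat "$f" >> output.txt     # $f now holds the line just read from tail -f
    echo "aaa" >> output.txt
done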