The loop below should read one line at a time into the variable m, but it prints junk values instead. Please help.
MyMachine:/u/home/Mohammed_Junaid> cat /tmp/F5
[10.222.73.99:22]
[10.000.73.99:22]
[10.111.73.99:22]
MyMachine:/u/home/Mohammed_Junaid>
MyMachine:/u/home/Mohammed_Junaid> for m in $(cat /tmp/F5); do echo $m;done
1
2
1
2
1
2
MyMachine:/u/home/Mohammed_Junaid>
Those strings are also valid glob patterns. Because you're not quoting the $m variable, you're letting the shell perform filename expansion.
It happens that the string [10.222.73.99:22] is equivalent to the glob pattern [012739.:], which matches a filename consisting of a single one of those characters, and it appears that you have a file named 1 and a file named 2 in your current directory.
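You can reproduce this safely in a scratch directory (a hypothetical setup, nothing here is from the original post):

cd "$(mktemp -d)"          # empty scratch directory
touch 1 2                  # filenames that match the bracket expression
m='[10.222.73.99:22]'
echo $m                    # unquoted: filename expansion kicks in, prints "1 2"
echo "$m"                  # quoted: prints [10.222.73.99:22]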
Always quote your shell variables, and don't use for to iterate over the lines of a file:
while IFS= read -r m; do echo "$m"; done < /tmp/F5
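With the sample file above, that prints the lines intact:

$ while IFS= read -r m; do echo "$m"; done < /tmp/F5
[10.222.73.99:22]
[10.000.73.99:22]
[10.111.73.99:22]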
This is probably a very simple question. I looked at other answers but couldn't come up with a solution. I have a 365-line date file, like below:
01-01-2000
02-01-2000
I need to read this file line by line and assign each day to a separate variable, like this:
d001=01-01-2000
d002=02-01-2000
I tried while read commands but couldn't get them to work. It takes a lot of time to assign them one by one. How can I do it quickly?
Trying to create individually named variables like that is a waste of time and not really supported. Better to use an associative array:
#!/bin/bash
declare -A array
while read -r line; do
    printf -v key 'd%03d' $((++c))   # build keys d001, d002, ...
    array[$key]=$line
done < file
Output
for i in "${!array[@]}"; do echo "key=$i value=${array[$i]}"; done
key=d001 value=01-01-2000
key=d002 value=02-01-2000
Assumptions:
an array is acceptable
array index should start with 1
Sample input:
$ cat sample.dat
01-01-2000
02-01-2000
03-01-2000
04-01-2000
05-01-2000
One bash/mapfile option:
unset d # make sure variable is not currently in use
mapfile -t -O1 d < sample.dat # load each line from file into separate array location
This generates:
$ typeset -p d
declare -a d=([1]="01-01-2000" [2]="02-01-2000" [3]="03-01-2000" [4]="04-01-2000" [5]="05-01-2000")
$ for i in "${!d[@]}"; do echo "d[$i] = ${d[i]}"; done
d[1] = 01-01-2000
d[2] = 02-01-2000
d[3] = 03-01-2000
d[4] = 04-01-2000
d[5] = 05-01-2000
In OP's code, references to $d001 now become ${d[1]}.
A quick one-liner would be:
eval $(awk 'BEGIN{cnt=0}{printf "d%3.3d=\"%s\"\n",cnt,$0; cnt++}' your_file)
eval makes the shell variables known inside your script or shell. Use echo $d000 to show the first one of the newly defined variables. There should be no shell special characters (like * and $) inside your_file. Remove eval $() to see the result of the awk command. The \" quoted %s is to allow spaces in the variable values. If you don't have any spaces in your_file you can remove the \" before and after %s.
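To see exactly what eval receives, run the awk command on its own; with the two sample dates above it prints:

$ awk 'BEGIN{cnt=0}{printf "d%3.3d=\"%s\"\n",cnt,$0; cnt++}' your_file
d000="01-01-2000"
d001="02-01-2000"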
I'm a beginner in bash and here is my problem. I have a file just like this one:
Azzzezzzezzzezzz...
Bzzzezzzezzzezzz...
Czzzezzzezzzezzz...
I am trying to edit this file in a script. The letters A, B and C are unique in the whole file and there is only one per line.
I want to replace the first e of each line with a number, which can be:
1 in line beginning with an A,
2 in line beginning with a B,
3 in line beginning with a C,
and I'd like to loop this in order to have this type of result
Azzz1zzz5zzz1zzz...
Bzzz2zzz4zzz5zzz...
Czzz3zzz6zzz3zzz...
All the numbers here are random integers between 0 and 9. I really need to start by replacing 1,2,3 in the first run of my loop, then 5,4,6, then 1,5,3, and so on.
I tried this
sed "0,/e/s/e/$1/;0,/e/s/e/$2/;0,/e/s/e/$3/" /tmp/myfile
But the result was this (because I didn't specify the line)
Azzz1zzz2zzz3zzz...
Bzzzezzzezzzezzz...
Czzzezzzezzzezzz...
I noticed that doing sed -i "/A/ s/$/ezzz/" /tmp/myfile will add ezzz at the end of the A line, so I tried this
sed -i "/A/ 0,/e/s/e/$1/;/B/ 0,/e/s/e/$2/;/C/ 0,/e/s/e/$3/" /tmp/myfile
but it failed
sed: -e expression #1, char 5: unknown command: `0'
Here I'm lost.
I have the number of e's in each A, B or C line in a variable (let's call it number_of_e_per_line).
Thank you for the time you're taking for me.
Just apply the s command to each line that matches A:
sed "
/^A/{ s/e/$1/; }
/^B/{ s/e/$2/; }
# or shorter
/^C/s/e/$3/
" /tmp/myfile
The s command by default replaces the first occurrence. You can do, for example, s/e/$1/2 to replace the second occurrence; s/e/$1/g ("global") replaces all occurrences.
0,/e/ specifies a range of lines - it filters lines from the first up until a line that matches /e/.
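Two quick illustrations (the second uses the 0 address in a range, a GNU sed extension):

$ echo 'zezezez' | sed 's/e/X/2'                 # replace only the 2nd occurrence
zezXzez
$ printf 'aaa\nbee\ncee\n' | sed '0,/e/s/e/X/'   # replace the first 'e' anywhere in the file
aaa
bXe
cee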
sed is not part of Bash. It is a separate (crude) programming language and is a very standard command. See https://www.grymoire.com/Unix/Sed.html .
Continuing from the comment: sed is a poor choice here unless all your files can only have 3 lines. The reason is that sed processes each line independently and has no way to keep a running count of the occurrences of 'e' across lines.
Instead, wrapping sed in a script and keeping track of the replacements allows you to handle any file no matter the number of lines. You just loop and handle the lines one at a time, e.g.
#!/bin/bash
[ -z "$1" ] && { ## valiate one argument for filename provided
printf "error: filename argument required.\nusage: %s filename\n" "./$1" >&2
exit 1
}
[ -s "$1" ] || { ## validate file exists and non-empty
printf "error: file not found or empty '%s'.\n" "$1"
exit 1
}
declare -i n=1  ## occurrence counter, initialized to 1
## loop reading each line
while read -r line || [ -n "$line" ]; do
    [[ $line == *e* ]] || { echo "$line"; continue; }  ## pass lines without 'e' through unchanged
    sed "s/e/1/$n" <<< "$line"  ## substitute the n'th occurrence of 'e'
    ((n++))                     ## increment counter
done < "$1"
Your data file having "..." at the end of each line suggests your file is larger than the snippet posted. If you have lines beginning 'A' - 'Z', you don't want to write 26 separate /match/s/find/replace/ substitutions. And if you have somewhere between 3 and 26 (or more), you don't want to rewrite a different sed expression for every new file you are faced with.
That's why I say sed is a poor choice: you really have no way to make the task generic with sed. The downside of the script is that it, too, becomes a poor choice as the number of records grows (over 100000 or so, just due to efficiency).
Example Use/Output
With the script in replace-e-incremental.sh and your data in file, you would do:
$ bash replace-e-incremental.sh file
Azzz1zzzezzzezzz...
Bzzzezzz1zzzezzz...
Czzzezzzezzz1zzz...
To Modify file In-Place
Since the script makes multiple calls to sed, you need to redirect its output to a temporary file and then replace the original by overwriting it with the temp file, e.g.
$ bash replace-e-incremental.sh file > mytempfile && mv -f mytempfile file
$ cat file
Azzz1zzzezzzezzz...
Bzzzezzz1zzzezzz...
Czzzezzzezzz1zzz...
I had what I thought was a simple task that I could easily do, as I had done something similar before.
I have an input file input.csv
1a,1b
2a,2b
I would like the following output
Output file 1
This is variable 1 named 1a ok
This is variable 2 named 1b ok
Output file 2
This is variable 1 named 2a ok
This is variable 2 named 2b ok
I thought I could do something like the below:
i=1
while IFS=, read var1 var2; do
echo This is variable 1 named "var1" > filenamei
echo This is variable 2 named "var2" >> filenamei
i=i+1
done </inputfile.csv
I previously wrote code to take a single variable from a long file and write output to a single file, and it worked fine, like below.
Input file
a
b
Single output file
This is A
This is B
Script was
while read p; do
    echo this is "$p" >> outputfile
done < inputfile
I've been through lots of different errors but I'm getting nowhere.
This is easy with a double loop: the outer loop iterates over lines and the inner one over the comma-separated fields. How about this:
#!/bin/bash
i=1
while read -r line; do
    ifs_back="$IFS"
    IFS=","
    set -- $line
    for ((j=1; j<=$#; j++)); do
        echo This is variable "$j" named "${!j}" >> "filename${i}"
    done
    IFS="$ifs_back"
    i=$((i+1))
done < "inputfile.csv"
Explanations:
In order to split the input line on commas, we temporarily set IFS to "," and then assign the fields to the positional parameters $1, $2, etc.
The loop counter j for the inner loop starts at 1 and ends at $#, the number of fields.
We can access the value of the j-th positional parameter via ${!j} (see the small example after this list).
As a cleanup at the end of the inner loop, we restore IFS and increment i for the next line.
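A tiny illustration of that indirect expansion:

set -- foo bar baz   # positional parameters $1 $2 $3
j=2
echo "${!j}"         # prints: bar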
The code above is flexible in the number of lines and fields, so it would work with the input:
1a,1b
2a,2b
3a,3b
as well as with:
1a,1b,1c
2a,2b,2c
3a,3b,3c
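Assuming the script is saved as makefiles.sh (a hypothetical name) next to inputfile.csv, the original two-line input would produce:

$ bash makefiles.sh
$ cat filename1
This is variable 1 named 1a
This is variable 2 named 1b
$ cat filename2
This is variable 1 named 2a
This is variable 2 named 2b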
Hope this helps.
I have a file, abc, which has the content below:
Bob 23
Jack 44
Rahul 36
I also have a shell script that does the addition of all the numbers there.
The specific part that picks up these numbers is:
while read line
do
    num=`echo ${line#* }`
    sum=`expr $sum + $num`
    count=`expr $count + 1`
done < "$readfile"
I assumed that the code just picks up the last field from the file, but it doesn't. If I modify the file like
Bob 23 12
Jack 44 23
Rahul 36 34
The same script fails with a syntax error.
NOTE: I know there are other ways to pick up the field value, but I would like to know how this works.
The syntax ${line#* } skips the shortest string from the beginning until it finds a space and returns the rest. It worked fine when you had just 2 columns, but it will not work with 3 columns: it returns the last 2 column values, and using that in the sum operation throws an error. To see why, just imagine
str='foo bar'
printf '%s\n' "${str#* }"
bar
but imagine the same for 3 fields
str='foo bar foobar'
printf '%s\n' "${str#* }"
bar foobar
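With ## (remove the longest match from the beginning), only the last field survives:

str='foo bar foobar'
printf '%s\n' "${str##* }"
foobar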
As shown above, the parameter expansion syntax "${str##* }" skips the longest substring from the beginning, keeping only the last field. To fix your script for the example with 3 columns, I would use a script as below.
It does a simple input redirection on the file and uses the read command with the default IFS value, which splits on whitespace. So I'm getting only the 3rd field on each line (even if it has more fields); the _ placeholders mark the fields I'm skipping. You could also use named variables as placeholders and use their values in the script.
declare -i sum=0
while read -r _ _ value _; do
    ((sum+=value))
done < file
printf '%d\n' "$sum"
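With the three-column example above (12 + 23 + 34), this prints:

$ bash sum-last.sh   # hypothetical name for the script above
69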
See Bash - Parameter Expansion (Substring removal) to understand more.
You could also use the parameter expansion syntax ${line##* } directly, as below:
while read -r line; do
    ((sum+=${line##* }))
done < file
[Not relevant to the current question]
If you just want the sum computed and are not specifically set on a bash script, you can use a simple awk command to sum the values in the 3rd column:
awk '{sum+=$3}END{print sum}' inputfile
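With the same three-column file as input, this prints:

$ awk '{sum+=$3}END{print sum}' inputfile
69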
I wrote 2 different scripts but I am stuck on the same problem.
The problem is I am making a table from a file ($2) that I get in the arguments, and $1 is the number of columns. It is a little bit hard to explain, but I will show you the input and output.
The problem now is that I don't know how to save every column in a different variable so I can build it into my HTML code later:
#printf <TR><TD>$...</TD><TD>$...</TD><TD>$..</TD></TR><TD>$...
So the input looks like this:
Name\tSize\tType\tprobe
bla\t4711\tfile\t888888888
abcde\t4096\tdirectory\t5555
eeeee\t333333\tblock\t6666
aaaaaa\t111111\tpackage\t7777
sssss\t44444\tfile\t8888
bbbbb\t22222\tfolder\t9999
Code:
c=1
column=$1
file=$2
echo "$( < $file)"| while read Line ; do
Name=$(sed "s/\\\t/ /g" $file | cut -d' ' -f$c,-$column)
printf "$Name \n"
#let c=c+1
#printf "<TR><TD>$Name</TD><TD>$Size</TD><TD>$Type</TD></TR>\n"
exit 0
done
Output:
Name Size Type probe
bla 4711 file 888888888
abcde 4096 directory 5555
eeeee 333333 block 6666
aaaaaa 111111 package 7777
sssss 44444 file 8888
bbbbb 22222 folder 9999
This is a tailor-made job for awk. See this script:
awk -F'\t' '{printf "<tr>";for(i=1;i<=NF;i++) printf "<td>%s</td>", $i;print "</tr>"}' input
<tr><td>bla</td><td>4711</td><td>file</td><td>888888888</td></tr>
<tr><td>abcde</td><td>4096</td><td>directory</td><td>5555</td></tr>
<tr><td>eeeee</td><td>333333</td><td>block</td><td>6666</td></tr>
<tr><td>aaaaaa</td><td>111111</td><td>package</td><td>7777</td></tr>
<tr><td>sssss</td><td>44444</td><td>file</td><td>8888</td></tr>
<tr><td>bbbbb</td><td>22222</td><td>folder</td><td>9999</td></tr>
In bash:
celltype=th
while IFS=$'\t' read -r -a columns; do
    rowcontents=$( printf "<$celltype>%s</$celltype>" "${columns[@]}" )
    printf '<tr>%s</tr>\n' "$rowcontents"
    celltype=td
done < <( sed $'s/\\\\t/\t/g' "$2" )
Some explanations:
IFS=$'\t' read -r -a columns reads a line from standard input, using only the tab character to separate fields, and putting each field into a separate element of the array columns. We change IFS so that other whitespace, which could occur in a field, is not treated as a field delimiter.
On the first line read from standard input, <th> elements will be output by the printf line. After resetting the value of celltype at the end of the loop body, all subsequent rows will consist of <td> elements.
When setting the value of rowcontents, we take advantage of the fact that printf reuses its format string (the first argument) as many times as necessary to consume all the remaining arguments.
Input is via process substitution from the sed command, which requires a crazy amount of quoting. First, the entire argument is quoted with $'...', which tells bash to replace escaped characters. bash converts this to the literal string s/\\t/^T/g, where I am using ^T to represent a literal ASCII 09 tab character. When sed sees this argument, it performs its own escape replacement, so the search text is a literal backslash followed by a literal t, to be replaced by a literal tab character.
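You can verify the quoting layers yourself; with GNU cat, the -A option shows the tab as ^I:

$ printf 'a\\tb\n' | sed $'s/\\\\t/\t/g' | cat -A
a^Ib$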
The first argument, the column count, is unnecessary and is ignored.
Normally, you avoid making the while loop part of a pipeline because you set parameters in the loop that you want to use later. Here, all the variables are truly local to the while loop, so you could avoid the process substitution and use a pipeline if you wish:
sed $'s/\\\\t/\t/g' "$2" | while IFS=$'\t' read -r -a columns; do
...
done