I have a source file that is a combination of multiple files that have been merged together. My script is supposed to separate them into the original individual files.
Whenever I encounter a line that starts with "FILENM", that means that it's the start of the next file.
All of the detail lines in the files are fixed width, so I'm currently encountering a problem where a line that starts with leading whitespace has that whitespace stripped when it's not supposed to be.
How do I enhance this script to retain the leading whitespaces?
while read line
do
lineType=`echo $line | cut -c1-6`
if [ "$lineType" == "FILENM" ]; then
fileName=`echo $line | cut -c7-`
else
echo "$line" >> $filePath/$fileName
fi
done <$filePath/sourcefile
The leading spaces are removed because read splits the input into words. To counter this, set the IFS variable to the empty string, like this:
OLD_IFS="$IFS"
IFS=
while read line
do
...
done <$filePath/sourcefile
IFS="$OLD_IFS"
To preserve the IFS variable you could write the while loop in the following way:
while IFS= read line
do
...
done < file
Also, to preserve backslashes, use the read -r option.
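Putting both fixes together with the original script, a minimal sketch of the corrected loop could look like the following (assuming $filePath is already set, as in the question; swapping the echo | cut calls for bash substring expansion is optional, but it avoids another round of word splitting):
while IFS= read -r line          # IFS= keeps leading whitespace, -r keeps backslashes
do
    lineType=${line:0:6}         # first six characters of the line
    if [ "$lineType" == "FILENM" ]; then
        fileName=${line:6}       # everything after "FILENM"
    else
        printf '%s\n' "$line" >> "$filePath/$fileName"
    fi
done < "$filePath/sourcefile"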
I have a template script with some analysis, and the only thing I need to change in it is the case.
#!/bin/bash
CASE=XXX
... the rest of the script where I use $CASE
I created a list of all my cases, which I saved into a file: list.txt.
So my list.txt file may contain cases such as XXX, YYY, ZZZ.
Now I would like to run a loop over the list.txt content, fill my template_script.sh with a case from list.txt, and then save the file under a new name: script_CASE.sh
for case in `cat ./list.txt`;
do
# open template_script.sh
# use somehow the line from template_script.sh (maybe substitute CASE=$case)
# save template_script with a new name script_$case
done
In pure bash:
#!/bin/bash
while IFS= read -r casevalue; do
escaped=${casevalue//\'/\'\\\'\'} # escape single quotes if any
while IFS= read -r line; do
if [[ $line = CASE=* ]]; then
echo "CASE='$escaped'"
else
echo "$line"
fi
done < template_script.sh > "script_$casevalue"
done < list.txt
Note that saving to "script_$casevalue" may not work if the case contains a / character.
If it is guaranteed that the case values (lines in list.txt) need not be escaped, then using sed is simpler:
while IFS= read -r casevalue; do
sed -E "s/^CASE=(.*)/CASE=$casevalue/" template_script.sh > "script_$casevalue"
done < list.txt
But this approach is fragile and will fail, for instance, if a case value contains a & character. The pure bash version, I believe, is very robust.
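A quick way to see the & problem in action is to feed sed a hypothetical case value containing &; in a sed replacement, & expands to the whole matched text:
casevalue='A&B'
printf 'CASE=XXX\n' | sed -E "s/^CASE=(.*)/CASE=$casevalue/"
# prints CASE=ACASE=XXXB instead of the intended CASE=A&B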
Converting my comment to an answer so that the solution is easy to find for future visitors.
You may use this bash script:
while read -r c; do
sed "s/^CASE=.*/CASE=$c/" template_script.sh > "script_${c}.sh"
done < list.txt
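For illustration, if list.txt contains XXX, YYY and ZZZ (one per line), running that loop should leave you with script_XXX.sh, script_YYY.sh and script_ZZZ.sh, each a copy of template_script.sh with only the CASE line changed; a quick spot-check might look like:
grep '^CASE=' script_YYY.sh    # expected output: CASE=YYY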
The content of the script is:
#!/bin/bash
tempconf="/tmp/test.file"
while read line
do
echo $line
done < test.conf > $tempconf
The content of the test.conf is:
[PORT]
tcp_ports=7000-7200
udp_ports=7000-8000, 40000-49999
[H323H]
maxSendThreads=10
maxRecvThreads=10
[SDK]
appPwd=1111111
amsAddress=192.168.222.208:8888
The content of the output file "/tmp/test.file" is:
[PORT]
tcp_ports=7000-7200
udp_ports=7000-8000, 40000-49999
2
maxSendThreads=10
maxRecvThreads=10
[SDK]
appPwd=1111111
amsAddress=192.168.222.208:8888
The question is: why does [H323H] turn into 2? I would appreciate it if anyone could explain it to me.
[] has a special meaning for the shell: it means "a single character taken from any of the characters between the brackets". So when you run
echo [H323H]
the shell looks for a file named H, 3, or 2 (a single character from the set between the brackets). If at least one file matches, [H323H] is replaced with all the matching file names in the output; otherwise it's reproduced as is.
source: https://unix.stackexchange.com/a/259385
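You can reproduce the behaviour in an empty directory; the unquoted pattern stays literal only while nothing matches it:
cd "$(mktemp -d)"    # start in an empty directory
echo [H323H]         # no file matches, so the pattern is printed literally: [H323H]
touch 2              # create a file whose name is one of the bracketed characters
echo [H323H]         # now the glob matches, so this prints: 2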
Using quotes around $line would solve your problem without the need to check for files matching those characters (which would make the script not very robust):
#!/bin/bash
tempconf="/tmp/test.file"
while read -r line
do
echo "$line"
done < test.conf > "$tempconf"
Description of Task & problem
I have dumped a list of files which match certain criteria to a text file. The command I used is:
find . -name "*.logic" | grep -v '.bak' | grep -v 'Project File Backup' > logic_manifest.txt
Filenames with spaces in them are proving difficult to open automatically, such as:
./20160314 _ Pop/20160314 _ Pop.logic
I have replaced the spaces with '\ ' to escape them, but the open command complains:
The file /Users/daniel/Music/Logic/20160314\ _\ Pop/20160314\ _\ Pop.logic does not exist.
When I copy that parsed path, type open in the terminal and paste it in, the file opens successfully.
My BASH script:
#!/bin/bash
clear
# file full of file paths, gathered using the find command
#logic_manifest.txt
# For keeping track of which line of the file I'm using
COUNTER=0
it=1
while IFS='' read -r line || [[ -n "$line" ]]; do
# Increment iterator
COUNTER=`expr $COUNTER + $it`
# replace spaces with a backslash and space
line=${line// /<>}
line=${line//<>/'\ '}
# print the file name and the line it is on
echo "Line: $COUNTER $line"
#open the file
open "$line"
# await key press before moving on to next iterator
read input </dev/tty
done < "$1"
Encapsulating the filename in quote marks has not helped either:
line=${line// /<>}
line=${line//<>/'\ '}
line="\"$line\""
The file /Users/daniel/Music/Logic/"./20160314\ _\ Pop/20160314\ _\
Pop.logic" does not exist.
Nor did passing "\${line}" to open
Question
What do I need to do to enable the open command to launch the files successfully?
Renaming the directories and filenames is not a viable option at this time.
Spaces in filenames are bad, I know, I put it down to moments of madness
There is absolutely no need whatsoever to replace any characters in line.
This simpler loop should open files just fine:
while IFS='' read -r line; do
((COUNTER++))
echo "Line: $COUNTER $line"
open "$line"
read input </dev/tty
done < "$1"
That's it. Moreover:
Spaces in filenames are bad, I know, I put it down to moments of madness.
There's nothing wrong with spaces in filenames.
You just have to use proper quoting, that's all.
That is, if the file names didn't have spaces and other special characters in them, then you could write open $line and it would work.
Since they contain spaces, you must enclose the variable in double-quotes, as in open "$line".
Actually, it is strongly recommended to enclose variable expansions in double quotes whenever they are used as command-line arguments.
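To see what the quoting changes, compare how a hypothetical path with spaces is split into arguments:
line='./20160314 _ Pop/20160314 _ Pop.logic'
printf '<%s>\n' $line     # unquoted: the path is split into five separate words
printf '<%s>\n' "$line"   # quoted: a single argument with the spaces intact
open "$line"              # this is the form the loop should use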
I have a test.txt file which contains key-value pairs, just like any other property file.
test.txt
Name="ABC"
Age="24"
Place="xyz"
I want to extract the values of the different keys into corresponding variables. For that, I have written the following shell script:
master.sh
file=test.txt
while read line; do
value1=`grep -i 'Name' $file|cut -f2 -d'=' $file`
value2=`grep -i 'Age' $file|cut -f2 -d'=' $file`
done <$file
but when I execute it, it doesn't run properly, giving me the entire line extracted by the grep part of the command as output. Can someone please point me to the error?
If I understood your question correctly, the following Bash script should do the trick:
#!/bin/bash
IFS="="
while read k v ; do
test -z "$k" && continue # skip empty lines
declare $k=$v
done <test.txt
echo $Name
echo $Age
echo $Place
Why is that working? Most information can be retrieved from bash's man page:
IFS is the "Internal Field Separator" which is used by bash's 'read' command to separate fields in each line. By default, IFS separates along spaces, but it is redefined to separate along the equal sign. It is a bash-only solution similar to the 'cut' command, where you define the equal sign as delimiter ('-d =').
The 'read' builtin reads two fields from a line. As only two variables are provided (k and v), the first field ends up in k, all remaining fields (i.e. after the equal sign) end up in v.
As the comment states, empty lines are skipped, i.e. those where the k variable is empty (test -z).
'declare' is a bash builtin as well; here it receives the argument $k=$v (after expansion), so the statement becomes equivalent to Name="ABC" etc.
'<test.txt' after 'done' tells bash to read test.txt and to feed it line by line into the 'read' builtin further up.
The three 'echo' statements are simply to show that this solution did work.
The format of the file is valid sh syntax, so you could just source the file:
source test.txt
In any case, your code doesn't work because after the pipe you shouldn't specify the file again; cut should read its input from the pipe, not from $file.
value1=$(grep -i 'Name' "$file" | cut -f2 -d'=')
would keep your logic
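Since grep already scans the whole file on each call, the while loop around it is redundant; a minimal version keeping the grep/cut logic would simply be:
#!/bin/bash
file=test.txt
value1=$(grep -i 'Name' "$file" | cut -f2 -d'=')
value2=$(grep -i 'Age' "$file" | cut -f2 -d'=')
echo "$value1"    # prints "ABC" (quotes included, since they are part of the line)
echo "$value2"    # prints "24"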
This is a comment, but the comment box does not allow formatting. Consider rewriting this:
while read line; do
value1=`grep -i 'Name' $file|cut -f2 -d'=' $file`
value2=`grep -i 'Age' $file|cut -f2 -d'=' $file`
done <$file
as:
while IFS== read key value; do
case $key in
Name|name) value1=$value;;
Age|age) value2=$value;;
esac;
done < $file
Parsing the line multiple times via cut is inefficient. This is slightly different from your version, since the comparison is case sensitive, but that is easily fixed if necessary. For example, you could preprocess the input file and convert everything to lower case. You can do the preprocessing on the fly, but be aware that this will put your while loop in a subprocess, which will require some additional care (since the variable definitions will end with the pipeline), though that is not significant. But running the entire file through grep twice for each line of the file is O(n^2), and ghastly! (Why are you reading the entire file anyway instead of just echoing the line?)
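If case-insensitive matching is wanted without preprocessing the file, one option (bash 4+ only) is to lowercase the key before the case statement; a sketch:
while IFS== read -r key value; do
    case ${key,,} in            # ${key,,} lowercases the key (bash 4+)
        name) value1=$value;;
        age)  value2=$value;;
    esac
done < "$file"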
I have a text file having some names line by line.
I am reading this file through KornShell (ksh) and getting those names and performing some operations in loop.
I want to put some comments in the text file for readability (i.e., lines starting with # are comments and there is no need to read them).
So, what I want is to read only the lines which do not start with the # symbol.
In ksh, I am reading like this:
while read base
do
---
---
done<file
I tried to use grep, but it is not working.
I want the correct syntax to achieve it in ksh.
You can do for example this (read.sh):
#!/bin/ksh
while read line
do
[[ $line = \#* ]] && continue
echo $line
done < read.sh
How about this (edited to include full code snippet):
while read base
do
# skip comments
[ -z "`echo $base | grep '^#'`" ] || continue
# handle remaining lines here
done<file
But the other answer contains a much more concise and ksh-ish solution.
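If you still want to use grep, you can filter the comments out before the loop instead of testing inside it. A sketch (note that in ksh93 the last part of a pipeline runs in the current shell, so variables set inside the loop are still visible afterwards, unlike in bash):
grep -v '^#' file | while read -r base
do
    # handle only the non-comment lines here
    print "$base"
done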