Bash script: increment number at end with zero prefix

Given a file with contents like this:
foo: 8.3.1
bar: 803001
I need a bash script that reads this file, increments only the last number on each line, and treats the second line's last three digits as room for the z in x.y.z to grow to three digits, then overwrites the original file with the new data:
Input 1:
foo: 8.3.1
bar: 803001
Output 1:
foo: 8.3.2
bar: 803002
Input 2:
foo: 8.3.9
bar: 803009
Output 2:
foo: 8.3.10
bar: 803010
Input 3:
foo: 8.3.199
bar: 803199
Output 3:
foo: 8.3.200
bar: 803200
I could do this in 2 seconds in Java, but I need to do it in a shell script or I'll face endless taunting from the build team.
Short of some rough string splitting, any slick sed command would be a big help!

awk to the rescue!
awk -v d='.' '{n=split($2,v,d);
               if (n>1) $2=v[1] d v[2] d v[3]+1;
               else $2++}1' file
Depending on what else is in the file, you may need to qualify the replacement with a condition before the statement.
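For example, a sketch that restricts the increment to the foo:/bar: lines from the sample data and passes everything else through unchanged:
awk -v d='.' '/^(foo|bar):/ {n=split($2,v,d);
                             if (n>1) $2=v[1] d v[2] d v[3]+1;
                             else $2++} 1' file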

while read -r key val _; do
    val_left="${val%.*}" val_right="${val##*.}"
    printf '%s ' "$key"
    [[ $val = *.* ]] && printf '%s.' "$val_left"
    printf '%d\n' $(( 1+val_right ))
done
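This loop reads stdin and writes stdout; to meet the question's overwrite requirement, one hedged approach is to route it through a temporary file:
tmp=$(mktemp)
while read -r key val _; do
    val_left="${val%.*}" val_right="${val##*.}"
    printf '%s ' "$key"
    [[ $val = *.* ]] && printf '%s.' "$val_left"
    printf '%d\n' $(( 1+val_right ))
done < file > "$tmp" && mv "$tmp" file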

Bikeshedding and slightly golfing. Increment the number formed by the digits at the end of line:
perl -i -pe '/^(foo|bar):/ && s/\d+$/$&+1/e;' input
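For example, starting from the question's second input:
$ cat input
foo: 8.3.9
bar: 803009
$ perl -i -pe '/^(foo|bar):/ && s/\d+$/$&+1/e;' input
$ cat input
foo: 8.3.10
bar: 803010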

Related

Arithmetic operations using numbers from grep

I have FILE from which I can extract two numbers using grep. The numbers appear in the last column.
$ grep number FILE
number1: 123
number2: 456
I would like to assign the numbers to variables, e.g. $num1 and $num2, and do some arithmetic operations using the variables.
How can I do this using bash commands?
Assumptions:
we want to match on lines that start with the string number
we will always find 2 matches for ^number from the input file
not interested in storing values in an array
Sample data:
$ cat file.dat
number1: 123
not a number: abc
number: 456
We'll use awk to find the desired values and print all to a single line of output:
$ awk '/^number/ { printf "%s ",$2 }' file.dat
123 456
From here we can use read to load the variables:
$ read -r num1 num2 < <(awk '/^number/ { printf "%s ",$2 }' file.dat)
$ typeset -p num1 num2
declare -- num1="123"
declare -- num2="456"
$ echo ".${num1}.${num2}."
.123.456.
NOTE: periods added as visual delimiters
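With num1 and num2 populated, the arithmetic itself is plain shell expansion, e.g.:
$ echo $(( num1 + num2 ))
579
$ echo $(( num2 - num1 ))
333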
Firstly, you need to extract the numbers from the file. Assuming that the file is always in the format stated, you can use a while loop combined with the read command to read the numbers into a named variable, one row at a time.
You can then use the $(( )) operator to perform integer arithmetic to keep a running total of the incoming numbers.
For example:
#!/bin/bash
declare -i total=0                # -i declares an integer.
while read -r discard number; do  # read returns false at EOF; discard is ignored.
    total=$((total+number))       # Variables don't need the '$' prefix in this context.
done < FILE                       # the redirection feeds FILE to 'read' via stdin.
echo "Total is: ${total}"

How can I assign each column value to its name?

I have a MetaData.csv file that contains many values to perform an analysis. All I want is to:
1- Read the column names and make variables named after them.
2- Put the values in each column into the matching variables so that other commands can read them: column_name=its_value.
MetaData.csv:
MAF,HWE,Geno_Missing,Inds_Missing
0.05,1E-06,0.01,0.01
I wrote the following code, but it doesn't work well:
#!/bin/bash
Col_Names=$(head -n 1 MetaData.csv)  # Cut header (comma sep)
Col_Names=$(echo ${Col_Names//,/ })  # Convert header to space sep
Col_Names=($Col_Names)               # Convert header to an array
for i in $(seq 1 ${#Col_Names[@]}); do
    N="$(head -1 MetaData.csv | tr ',' '\n' | nl | grep -w "${Col_Names[$i]}" | tr -d " " | awk -F " " '{print $1}')"
    ${Col_Names[$i]}="$(cat MetaData.csv | cut -d"," -f$N | sed '1d')"
done
Output:
HWE=1E-06: command not found
Geno_Missing=0.01: command not found
Inds_Missing=0.01: command not found
cut: 2: No such file or directory
cut: 3: No such file or directory
cut: 4: No such file or directory
=: command not found
Expected output:
MAF=0.05
HWE=1E-06
Geno_Missing=0.01
Inds_Missing=0.01
Problems:
1- I want to use the array length (${#Col_Names[@]}) as the final iteration, which is 5, but array indices start from 0 (0-4). So the MAF column was not captured by the loop. The loop also iterates twice (once 0-4 and again 2-4!).
2- When I tried to call the values in the variables (echo $MAF), they were empty!
Any solution is really appreciated.
This produces the expected output you posted from the sample input you posted:
$ awk -F, -v OFS='=' 'NR==1{split($0,hdr); next} {for (i=1;i<=NF;i++) print hdr[i], $i}' MetaData.csv
MAF=0.05
HWE=1E-06
Geno_Missing=0.01
Inds_Missing=0.01
If that's not all you need then edit your question to clarify your requirements.
If I'm understanding your requirements correctly, would you please try something like:
#!/bin/bash
nr=1                                # initialize input line number to 1
while IFS=, read -r -a ary; do      # split the line on "," and assign the fields to "ary"
    if (( nr == 1 )); then          # handle the header line
        col_names=("${ary[@]}")     # assign column names
    else                            # handle the body lines
        for (( i = 0; i < ${#ary[@]}; i++ )); do
            printf -v "${col_names[i]}" "${ary[i]}"
            # assign the input field to the variable named "${col_names[i]}"
        done
        # now you can access the values via the column names
        echo "Fnames=$Fnames"
        echo "MAF=$MAF"
        fname_list+=("$Fnames")     # create a list of Fnames
    fi
    (( nr++ ))                      # increment the input line number
done < MetaData.csv
echo "${fname_list[@]}"             # print the list of Fnames
Output:
Fnames=19.vcf.gz
MAF=0.05
Fnames=20.vcf.gz
MAF=
Fnames=21.vcf.gz
MAF=
Fnames=22.vcf.gz
MAF=
19.vcf.gz 20.vcf.gz 21.vcf.gz 22.vcf.gz
The statement IFS=, read -r -a ary is mostly equivalent to your
first three lines; it splits the input on "," and assigns the
field values to the array variable ary.
There are several ways to use a variable's value as a variable name
(Indirect Variable References). printf -v VarName Value is one of them.
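A minimal sketch of both directions, separate from the CSV logic (names are illustrative):
name="MAF"
printf -v "$name" '%s' "0.05"  # indirect assignment: sets the variable named by $name
echo "$MAF"                    # -> 0.05
echo "${!name}"                # indirect reference: reads the variable named by $name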
[EDIT]
Based on the OP's updated input file, here is another version:
#!/bin/bash
nr=1                                # initialize input line number to 1
while IFS=, read -r -a ary; do      # split the line on "," and assign the fields to "ary"
    if (( nr == 1 )); then          # handle the header line
        col_names=("${ary[@]}")     # assign column names
    else                            # handle the body lines
        for (( i = 0; i < ${#ary[@]}; i++ )); do
            printf -v "${col_names[i]}" "${ary[i]}"
            # assign the input field to the variable named "${col_names[i]}"
        done
    fi
    (( nr++ ))                      # increment the input line number
done < MetaData.csv
for n in "${col_names[@]}"; do      # iterate over the variable names
    echo "$n=${!n}"                 # print each variable name and its value
done
# you can also specify the variable names literally as follows:
echo "MAF=$MAF HWE=$HWE Geno_Missing=$Geno_Missing Inds_Missing=$Inds_Missing"
Output:
MAF=0.05
HWE=1E-06
Geno_Missing=0.01
Inds_Missing=0.01
MAF=0.05 HWE=1E-06 Geno_Missing=0.01 Inds_Missing=0.01
As for the output, the first four lines are printed by echo "$n=${!n}" and the last line is printed by echo "MAF=$MAF ....
You can choose either statement depending on your usage of the variables in the following code.
I don't really think you can implement a robust CSV reader/parser in Bash, but you can implement one that works to some extent with simple CSV files. For example, a very simple bash-implemented CSV reader might look like this:
#!/bin/bash
set -e

ROW_NUMBER='0'
HEADERS=()

while IFS=',' read -ra ROW; do
    if test "$ROW_NUMBER" == '0'; then
        for (( I = 0; I < ${#ROW[@]}; I++ )); do
            HEADERS["$I"]="${ROW[I]}"
        done
    else
        declare -A DATA_ROW_MAP
        for (( I = 0; I < ${#ROW[@]}; I++ )); do
            DATA_ROW_MAP[${HEADERS["$I"]}]="${ROW[I]}"
        done
        # DEMO {
        echo -e "${DATA_ROW_MAP['Fnames']}\t${DATA_ROW_MAP['Inds_Missing']}"
        # } DEMO
        unset DATA_ROW_MAP
    fi
    ROW_NUMBER=$((ROW_NUMBER + 1))
done
Note that it has multiple disadvantages:
it only works with ,-separated fields (truly "C"SV);
it cannot handle multiline records;
it cannot handle field escapes;
it considers the first row always represents a header row.
This is why many commands produce and consume \0-delimited data instead: that control character can be easier to handle. What I'm not sure about is whether test is the only external command executed by bash here (I believe it is, but it could probably be re-implemented using case so that no external test is executed).
Example of use (with the demo output):
./read-csv.sh < MetaData.csv
19.vcf.gz 0.01
20.vcf.gz
21.vcf.gz
22.vcf.gz
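One concrete failure mode, using a hypothetical quoted field: the comma split tears the field apart, so four fields come back instead of three:
$ IFS=',' read -ra ROW <<< 'name,"last, first",age'
$ printf '[%s]\n' "${ROW[@]}"
[name]
["last]
[ first"]
[age]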
I wouldn't recommend using this parser at all; instead, use a more CSV-oriented tool (Python would probably be the easiest choice, or if your favorite language, as you mentioned, is R, then this may be another option for you: Run R script from command line).

How to find the next occurrence of a string in a file, starting from line number $x

In the following example, I have a variable $line set to "14", which refers to the 14th line of the file, "four".
I need a way in bash to start at whichever line number is set in $line (14) and find the line number of the next occurrence of a string (foo) in the same file. In this case, the result would be line 16.
one
foo
bar
foo
two
foo
three
foo
foo
bar
foo
foo
foo
four
bar
foo
bar
five
foo
six
foo
foo
bar
foo
$line = "14"
$search = "foo"
Lots of ways to do this. But if you want to do it in bash alone (i.e. no awk, perl, etc), here's one way:
$ mapfile -t -O 1 a < ttt
$ for ((i=$line; i<=${#a[@]}; i++)); do [[ "${a[$i]}" = "$string" ]] && echo "$i" && break; done
The mapfile command sucks your file into an array (in a way that performs better than read in a loop), and the -O 1 causes it to start numbering array indices at 1 instead of 0. The for loop steps through the array, starting at $line, with a [[ that compares the current array value with $string.
I'd still love to see what solution you came up with, and help you understand why it didn't work.
Try awk:
line=14
search=foo
awk '(NR > n && $0 == s){ print NR; exit }' n=$line s="$search" file
Output:
16
You can assign the result to a variable this way:
declare -i var=$(awk '(NR > n && $0 == s){ print NR; exit }' n=$line s="$search" file)
sed one-liner -
Using your data, naming it "infile", I get
$: line=14; search=foo
$: sed -n $line,\${/$search/\{=\;q}} infile
16
This sets the shell variables before running sed (the shell expands $line and $search when it builds the sed script), and -n tells sed not to output anything unasked.
the command $line,\${/$search/\{=\;q}} evaluates to 14,${/foo/{=;q}}
That means "from line 14 to the end, find the next line that has "foo" in it, and then
print the line number (=) and
quit (q) without processing the rest of the file.
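As with the awk variant, the line number can be captured via command substitution:
$: next=$(sed -n $line,\${/$search/\{=\;q}} infile)
$: echo "$next"
16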

Unix one-liner to swap/transpose two lines in multiple text files?

I wish to swap or transpose pairs of lines according to their line numbers (e.g., switching the positions of lines 10 and 15) in multiple text files using a UNIX tool such as sed or awk.
For example, I believe this sed command should swap lines 14 and 26 in a single file:
sed -n '14p' infile_name > outfile_name
sed -n '26p' infile_name >> outfile_name
How can this be extended to work on multiple files? Any one-liner solutions welcome.
If you want to edit a file, you can use ed, the standard editor. Your task is rather easy in ed:
printf '%s\n' 14m26 26-m14- w q | ed -s file
How does it work?
14m26 tells ed to take line #14 and move it after line #26
26-m14- tells ed to take the line before line #26 (which is your original line #26) and move it after the line preceding line #14 (which is where your line #14 originally was)
w tells ed to write the file
q tells ed to quit.
If your numbers are in a variable, you can do:
linea=14
lineb=26
{
printf '%dm%d\n' "$linea" "$lineb"
printf '%d-m%d-\n' "$lineb" "$linea"
printf '%s\n' w q
} | ed -s file
or something similar. Make sure that linea<lineb.
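If the order isn't guaranteed, a small guard (a sketch) can swap the variables first:
if [ "$linea" -gt "$lineb" ]; then
    tmp=$linea
    linea=$lineb
    lineb=$tmp
fi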
If you want robust in-place updating of your input files, use gniourf_gniourf's excellent ed-based answer
If you have GNU sed and want in-place updating of multiple files at once, use
@potong's excellent GNU sed-based answer (see below for a portable alternative, and the bottom for an explanation)
Note: ed truly updates the existing file, whereas sed's -i option creates a temporary file behind the scenes, which then replaces the original - while typically not an issue, this can have undesired side effects, most notably, replacing a symlink with a regular file (by contrast, file permissions are correctly preserved).
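The symlink caveat is easy to demonstrate with GNU sed (a sketch; GNU sed's --follow-symlinks option changes this behavior):
$ echo hello > target
$ ln -s target link
$ sed -i 's/hello/world/' link
$ ls -l link    # link is now a regular file; target still contains "hello"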
Below are POSIX-compliant shell functions that wrap both answers.
Stdin/stdout processing, based on @potong's excellent answer:
POSIX sed doesn't support -i for in-place updating.
It also doesn't support using \n inside a character class, so [^\n] must be replaced with a cumbersome workaround that positively defines all characters except \n that can occur on a line - this is achieved with a character class combining printable characters with all (ASCII) control characters other than \n included as literals (via a command substitution using printf).
Also note the need to split the sed script into two -e options, because POSIX sed requires that a branching command (b, in this case) be terminated with either an actual newline or continuation in a separate -e option.
# SYNOPSIS
# swapLines lineNum1 lineNum2
swapLines() {
    [ "$1" -ge 1 ] || { printf "ARGUMENT ERROR: Line numbers must be decimal integers >= 1.\n" >&2; return 2; }
    [ "$1" -le "$2" ] || { printf "ARGUMENT ERROR: The first line number ($1) must be <= the second ($2).\n" >&2; return 2; }
    sed -e "$1"','"$2"'!b' -e ''"$1"'h;'"$1"'!H;'"$2"'!d;x;s/^\([[:print:]'"$(printf '\001\002\003\004\005\006\007\010\011\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037\177')"']*\)\(.*\n\)\(.*\)/\3\2\1/'
}
Example:
$ printf 'line 1\nline 2\nline 3\n' | swapLines 1 3
line 3
line 2
line 1
In-place updating, based on gniourf_gniourf's excellent answer:
Small caveats:
While ed is a POSIX utility, it doesn't come preinstalled on all platforms, notably not on Debian and the Cygwin and MSYS Unix-emulation environments for Windows.
ed always reads the input file as a whole into memory.
# SYNOPSIS
# swapFileLines lineNum1 lineNum2 file
swapFileLines() {
    [ "$1" -ge 1 ] || { printf "ARGUMENT ERROR: Line numbers must be decimal integers >= 1.\n" >&2; return 2; }
    [ "$1" -le "$2" ] || { printf "ARGUMENT ERROR: The first line number ($1) must be <= the second ($2).\n" >&2; return 2; }
    ed -s "$3" <<EOF
H
$1m$2
$2-m$1-
w
EOF
}
Example:
$ printf 'line 1\nline 2\nline 3\n' > file
$ swapFileLines 1 3 file
$ cat file
line 3
line 2
line 1
An explanation of @potong's GNU sed-based answer:
His command swaps lines 10 and 15:
sed -ri '10,15!b;10h;10!H;15!d;x;s/^([^\n]*)(.*\n)(.*)/\3\2\1/' f1 f2 fn
-r activates support for extended regular expressions; here, notably, it allows use of unescaped parentheses to form capture groups.
-i specifies that the files specified as operands (f1, f2, fn) be updated in place, without backup, since no optional suffix for a backup file is adjoined to the -i option.
10,15!b means that all lines that do not (!) fall into the range of lines 10 through 15 should branch (b) implicitly to the end of the script (given that no target-label name follows b), which means that the following commands are skipped for these lines. Effectively, they are simply printed as is.
10h copies (h) line number 10 (the start of the range) to the so-called hold space, which is an auxiliary buffer.
10!H appends (H) every line that is not line 10 - which in this case implies lines 11 through 15 - to the hold space.
15!d deletes (d) every line that is not line 15 (here, lines 10 through 14) and branches to the end of the script (skips remaining commands). By deleting these lines, they are not printed.
x, which is executed only for line 15 (the end of the range), replaces the so-called pattern space with the contents of the hold space, which at that point holds all lines in the range (10 through 15); the pattern space is the buffer on which sed commands operate, and whose contents are printed by default (unless -n was specified).
s/^([^\n]*)(.*\n)(.*)/\3\2\1/ then uses capture groups (parenthesized subexpressions of the regular expression that forms the first argument passed to function s) to partition the contents of the pattern space into the 1st line (^([^\n]*)), the middle lines ((.*\n)), and the last line ((.*)), and then, in the replacement string (the second argument passed to function s), uses backreferences to place the last line (\3) before the middle lines (\2), followed by the first line (\1), effectively swapping the first and last lines in the range. Finally, the modified pattern space is printed.
As you can see, only the range of lines spanning the two lines to swap is held in memory, whereas all other lines are passed through individually, which makes this approach memory-efficient.
This might work for you (GNU sed):
sed -ri '10,15!b;10h;10!H;15!d;x;s/^([^\n]*)(.*\n)(.*)/\3\2\1/' f1 f2 fn
This stores a range of lines in the hold space and then swaps the first and last lines following the completion of the range.
The -i flag edits each file (f1, f2 ... fn) in place.
With GNU awk:
awk '
FNR==NR {if(FNR==14) x=$0;if(FNR==26) y=$0;next}
FNR==14 {$0=y} FNR==26 {$0=x} {print}
' file file > file_with_swap
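The fixed line numbers can also be passed in as awk variables, so the same two-pass idea works for any pair (a sketch):
awk -v l1=14 -v l2=26 '
FNR==NR {if(FNR==l1) x=$0; if(FNR==l2) y=$0; next}
FNR==l1 {$0=y} FNR==l2 {$0=x} {print}
' file file > file_with_swap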
The following helper script allows using the power of find ... -exec ./script '{}' l1 l2 \; to locate the target files and to swap lines l1 & l2 in each file in place. (It requires that there are no identical duplicate lines within the file that fall within the search range.) The script uses sed to read the two swap lines from each file into an indexed array, then passes the lines to sed to complete the swap by matching. The sed call uses its "matched first address" state to limit the second expression's swap to the first occurrence. An example use of the helper script below, swapping lines 5 & 15 in all matching files:
find . -maxdepth 1 -type f -name "lnum*" -exec ../swaplines.sh '{}' 5 15 \;
For example, the find call above found files lnumorig.txt and lnumfile.txt in the present directory originally containing:
$ head -n20 lnumfile.txt.bak
1 A simple line of test in a text file.
2 A simple line of test in a text file.
3 A simple line of test in a text file.
4 A simple line of test in a text file.
5 A simple line of test in a text file.
6 A simple line of test in a text file.
<snip>
14 A simple line of test in a text file.
15 A simple line of test in a text file.
16 A simple line of test in a text file.
17 A simple line of test in a text file.
18 A simple line of test in a text file.
19 A simple line of test in a text file.
20 A simple line of test in a text file.
And swapped the lines 5 & 15 as intended:
$ head -n20 lnumfile.txt
1 A simple line of test in a text file.
2 A simple line of test in a text file.
3 A simple line of test in a text file.
4 A simple line of test in a text file.
15 A simple line of test in a text file.
6 A simple line of test in a text file.
<snip>
14 A simple line of test in a text file.
5 A simple line of test in a text file.
16 A simple line of test in a text file.
17 A simple line of test in a text file.
18 A simple line of test in a text file.
19 A simple line of test in a text file.
20 A simple line of test in a text file.
The helper script itself is:
#!/bin/bash

[ -z "$1" ] && { # validate required input (defaults set below)
    printf "error: insufficient input calling '%s'. usage: file [line1 line2]\n" "${0//*\//}" 1>&2
    exit 1
}

l1=${2:-10} # default/initialize line numbers to swap
l2=${3:-15}

while IFS=$'\n' read -r line; do # read lines to swap into indexed array
    a+=( "$line" )
done <<<"$(sed -n $((l1))p "$1" && sed -n $((l2))p "$1")"

(( ${#a[@]} < 2 )) && { # validate 2 lines read
    printf "error: requested lines '%d & %d' not found in file '%s'\n" "$l1" "$l2" "$1"
    exit 1
}

# swap lines in place with sed (remove .bak for no backups)
sed -i.bak -e "s/${a[1]}/${a[0]}/" -e "0,/${a[0]}/s/${a[0]}/${a[1]}/" "$1"

exit 0
Even though I didn't manage to get it all done in a one-liner, I decided it was worth posting in case you can make some use of it or take ideas from it. Note: if you do make use of it, test to your satisfaction before turning it loose on your system. The script currently uses sed -i.bak ... to create backups of the changed files for testing purposes. You can remove the .bak when you are satisfied it meets your needs.
If you have no use for setting default lines to swap in the helper script itself, then I would change the first validation check to [ -z "$1" -o -z "$2" -o -z "$3" ] to ensure all required arguments are given when the script is called.
While it does identify the lines to be swapped by number, it relies on a direct match of each line to accomplish the swap. This means that any identical duplicate lines up to the end of the swap range will cause an unintended match and a failure to swap the intended lines. This is part of the limitation imposed by not storing each line within the range of lines to be swapped, as discussed in the comments. It's a tradeoff. There are many, many ways to approach this, and all have their benefits and drawbacks. Let me know if you have any questions.
Brute Force Method
Per your comment, I revised the helper script to use the brute-force copy/swap method, which eliminates the problem of duplicate lines in the search range. This helper obtains the lines via sed as in the original, but then reads all lines from the file into a tmpfile, swapping the appropriately numbered lines when encountered. After the tmpfile is filled, it is copied over the original file and the tmpfile is removed.
#!/bin/bash

[ -z "$1" ] && { # validate required input (defaults set below)
    printf "error: insufficient input calling '%s'. usage: file [line1 line2]\n" "${0//*\//}" 1>&2
    exit 1
}

l1=${2:-10} # default/initialize line numbers to swap
l2=${3:-15}

while IFS=$'\n' read -r line; do # read lines to swap into indexed array
    a+=( "$line" )
done <<<"$(sed -n $((l1))p "$1" && sed -n $((l2))p "$1")"

(( ${#a[@]} < 2 )) && { # validate 2 lines read
    printf "error: requested lines '%d & %d' not found in file '%s'\n" "$l1" "$l2" "$1"
    exit 1
}

# create tmpfile (before setting the trap that references it), set trap, truncate
fn="$1"
tmpfn="$(mktemp swap_XXX)"
rmtemp () { cp "$tmpfn" "$fn"; rm -f "$tmpfn"; }
trap rmtemp SIGTERM SIGINT EXIT
declare -i n=1
:> "$tmpfn"

# swap lines via the tmpfile
while IFS=$'\n' read -r line; do
    if ((n == l1)); then
        printf "%s\n" "${a[1]}" >> "$tmpfn"
    elif ((n == l2)); then
        printf "%s\n" "${a[0]}" >> "$tmpfn"
    else
        printf "%s\n" "$line" >> "$tmpfn"
    fi
    ((n++))
done < "$fn"

exit 0
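Assuming the revised helper is saved as swaplines.sh, a direct single-file invocation looks like:
$ ./swaplines.sh lnumfile.txt 5 15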
If the line numbers to be swapped are fixed, then you might want to try something like the sed command in the following example to swap lines in-place in multiple files:
#!/bin/bash
# prep test files
for f in a b c ; do
    ( for i in {1..30} ; do echo $f$i ; done ) > /tmp/$f
done

sed -i -s -e '14 {h;d}' -e '15 {N;N;N;N;N;N;N;N;N;N;G;x;d}' -e '26 G' /tmp/{a,b,c}
# -i: in-place editing
# -s: treat each input file separately
# 14 {h;d}           # first swap line: hold; suppress
# 15 {N;N;...;G;x;d} # lines between: collect, append held line; hold result; suppress
# 26 G               # second swap line: append held lines (and output them all)

# dump test files
cat /tmp/{a,b,c}
(This is according to Etan Reisner's comment.)
If you want to swap two lines, you can send the file through sed twice. You could make it loop in one sed script if you really wanted, but this works:
e.g.
test.txt, created with: for a in {1..10}; do echo "this is line $a"; done >> test.txt
this is line 1
this is line 2
this is line 3
this is line 4
this is line 5
this is line 6
this is line 7
this is line 8
this is line 9
this is line 10
Then to swap lines 6 and 9:
sed ':a;6,8{6h;6!H;d;ba};9{p;x};' test.txt | sed '7{h;d};9{p;x}'
this is line 1
this is line 2
this is line 3
this is line 4
this is line 5
this is line 9
this is line 7
this is line 8
this is line 6
this is line 10
In the first sed it builds up the hold space with lines 6 through 8.
At line 9 it prints line 9, then prints the hold space (lines 6 through 8); this accomplishes the first move, putting 9 in place 6. Note: 6h;6!H avoids a stray newline at the top of the pattern space.
The second move occurs in the second sed script: it saves line 7 to the hold space, then deletes it and prints it after line 9.
To make it quasi-generic you can use variables like this:
A=3 && B=7 && sed ':a;'${A}','$((${B}-1))'{'${A}'h;'${A}'!H;d;ba};'${B}'{p;x};' test.txt | sed $(($A+1))'{h;d};'${B}'{p;x}'
Where A and B are the lines you want to swap, in this case lines 3 and 7.
If you want to swap two lines, create the script "swap.sh":
#!/bin/sh
sed -n "1,$((${2}-1))p" "$1"
sed -n "${3}p" "$1"
sed -n "$((${2}+1)),$((${3}-1))p" "$1"
sed -n "${2}p" "$1"
sed -n "$((${3}+1)),\$p" "$1"
then run:
sh swap.sh infile_name 14 26 > outfile_name

Bash - extracting a string between two points

For example:
((
extract everything here, ignore the rest
))
I know how to ignore everything within, but I don't know how to do the opposite. Basically, it'll be a file and it needs to extract the data between the two points and then output it to another file. I've tried countless approaches, and all seem to tell me the indentation I'm stating doesn't exist in the file, when it does.
If somebody could point me in the right direction, I'd be grateful.
If your data are "line oriented", i.e. the markers stand alone on their own lines (as in the example), you can try some of the following:
function getdata() {
cat - <<EOF
before
((
extract everything here, ignore the rest
someother text
))
after
EOF
}
echo "sed - with two seds"
getdata | sed -n '/((/,/))/p' | sed '1d;$d'
echo "Another sed solution"
getdata | sed -n '1,/((/d; /))/,$d;p'
echo "With GNU sed"
getdata | gsed -n '/((/{:a;n;/))/b;p;ba}'
echo "With perl"
getdata | perl -0777 -pe "s/.*\(\(\s*\\n(.*)?\)\).*/\$1/s"
PS: yes, it looks like a dance of crazy toothpicks
Assuming you want to extract the string inside (( and )):
VAR="abc((def))ghi"
echo "$VAR"
VAR=${VAR##*((}
VAR=${VAR%%))*}
echo "$VAR"
# cuts away the shortest match from the beginning
## cuts away the longest match from the beginning
% cuts away the shortest match at the end
%% cuts away the longest match at the end
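A quick demonstration of all four operators on a sample string:
s="one.two.three"
echo "${s#*.}"    # two.three  (shortest match removed from the beginning)
echo "${s##*.}"   # three      (longest match removed from the beginning)
echo "${s%.*}"    # one.two    (shortest match removed from the end)
echo "${s%%.*}"   # one        (longest match removed from the end)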
The file :
$ cat /tmp/l
((
extract everything here, ignore the rest
someother text
))
The script
$ awk '$1=="((" {p=1;next} $1=="))" {p=0;next} p' /tmp/l
extract everything here, ignore the rest
someother text
sed -n '/^((/,/^))/ { /^((/b; /^))/b; p }'
Brief explanation:
/^((/,/^))/: range addressing (inclusive)
{ /^((/b; /^))/b; p }: sequence of 3 commands
1. skip line with ^((
2. skip line with ^))
3. print
The line skipping is required to make the range selection exclusive.
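Applied to the same /tmp/l file as the awk answer above, it yields the same exclusive extraction:
$ sed -n '/^((/,/^))/ { /^((/b; /^))/b; p }' /tmp/l
extract everything here, ignore the rest
someother text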
