Replace all control characters in a range of each line with awk - bash

I have a file with several lines. Some of these lines contain LFs (0x0A) and CRs (0x0D) that I want removed. The catch is that I want to replace them with a SPACE only within a range of characters of every line, e.g. in a file:
30 30 30 30 30 30 30 30 30 30 **0D 0A** 30 30 0A; 0000000000..00
30 30 30 30 30 30 30 30 **0D 0A** 30 30 30 30 0A; 00000000..0000
I want to remove 0x0D and 0x0A from positions 0 to 12 of every line in the file.
I came up with
awk '{l=substr($0,1,12);r=substr($0,13);gsub(/\x00-\1F/," ",l);print l r}' ${f} > ${f}.noLF
but this does not seem to work. I guess substr stops at the first 0x0D.
Is there another solution?

awk '/\r$/ && length < 13 {sub(/\r$/,""); printf "%s ", $0; next} {print}' file
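This leans on the fact that awk has already split the input at every 0x0A, which is also why the gsub in the question never sees the embedded line breaks. A CR that falls within the first 12 columns therefore shows up as a short record ending in \r (length counts the trailing \r, hence < 13); the rule strips it and prints the fragment with a trailing space instead of a newline, joining it onto the rest of the logical line. Note that each CR LF pair becomes a single space rather than two.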

Here is something ugly that may work!
Save it as go
#!/bin/bash
while :
do
# Read 13 bytes from stdin, and replace carriage returns and linefeeds with spaces
dd bs=13 count=1 2>/dev/null | tr '\r\n' ' '
# Break out of loop if dd was not successful
[ ${PIPESTATUS[0]} -ne 0 ] && break
# Get rest of conventional line, breaking out of loop if EOF
read rest || break
echo "$rest"
done
It reads 13 bytes from your file and removes all carriage returns and linefeeds. Then it reads the rest of the conventional line and outputs that.
Use it like this:
chmod +x go
./go < yourfile
Example:
more file
q
wertyuiopqwertyuiop
qwerty
uiopqwertyuiop
./go < file
q wertyuiopqwertyuiop
qwerty uiopqwertyuiop
EDITED TO MATCH FURTHER QUESTIONS
#!/bin/bash
while :
do
# Read 13 bytes from stdin
first13=$(dd bs=13 count=1 2>/dev/null)
ddexitstatus=$?
# Pass chunks starting with "KT" through untouched,
# otherwise replace carriage returns and linefeeds with spaces
if echo "$first13" | grep -q "^KT"; then
echo "$first13"
else
echo "$first13" | tr '\r\n' ' '
fi
# Break out of loop if dd was not successful
[ $ddexitstatus -ne 0 ] && break
# Get rest of conventional line, breaking out of loop if EOF
read rest || break
echo "$rest"
done

Is there a command for substituting a set of characters by a set of strings?

I would like to substitute a set of single-byte characters with a set of literal strings in a stream, without any constraint on the line size.
#!/bin/bash
for (( i = 1; i <= 0x7FFFFFFFFFFFFFFF; i++ ))
do
printf '\a,\b,\t,\v'
done |
chars_to_strings $'\a\b\t\v' '<bell>' '<backspace>' '<horizontal-tab>' '<vertical-tab>'
The expected output would be:
<bell>,<backspace>,<horizontal-tab>,<vertical-tab><bell>,<backspace>,<horizontal-tab>,<vertical-tab><bell>...
I can think of a bash function that would do that, something like:
chars_to_strings() {
local delim buffer
while true
do
delim=''
IFS='' read -r -d '.' -n 4096 buffer && (( ${#buffer} != 4096 )) && delim='.'
if [[ -n "${delim:+_}" ]] || [[ -n "${buffer:+_}" ]]
then
# Do the replacements in "$buffer"
# ...
printf "%s%s" "$buffer" "$delim"
else
break
fi
done
}
But I'm looking for a more efficient way, any thoughts?
Since you seem to be okay with using ANSI C quoting via $'...' strings, then maybe use sed?
sed $'s/\a/<bell>/g; s/\b/<backspace>/g; s/\t/<horizontal-tab>/g; s/\v/<vertical-tab>/g'
Or, via separate commands:
sed -e $'s/\a/<bell>/g' \
-e $'s/\b/<backspace>/g' \
-e $'s/\t/<horizontal-tab>/g' \
-e $'s/\v/<vertical-tab>/g'
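Note that with $'...' the shell expands \a, \b, \t and \v into literal control bytes before sed parses the script, so sed never sees a backslash escape here (GNU sed would otherwise treat \b in a pattern as a word boundary).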
Or, using awk, which replaces newline characters too (by customizing the Output Record Separator, i.e., the ORS variable):
$ printf '\a,\b,\t,\v\n' | awk -vORS='<newline>' '
{
gsub(/\a/, "<bell>")
gsub(/\b/, "<backspace>")
gsub(/\t/, "<horizontal-tab>")
gsub(/\v/, "<vertical-tab>")
print $0
}
'
<bell>,<backspace>,<horizontal-tab>,<vertical-tab><newline>
For a simple one-liner with reasonable portability, try Perl.
for (( i = 1; i <= 0x7FFFFFFFFFFFFFFF; i++ ))
do
printf '\a,\b,\t,\v'
done |
perl -pe 's/\a/<bell>/g;
s/\b/<backspace>/g;s/\t/<horizontal-tab>/g;s/\v/<vertical-tab>/g'
Perl internally does some intelligent buffering, so it is not encumbered by lines longer than its input buffer.
Perl by itself is not POSIX, of course; but it can be expected to be installed on any even remotely modern platform (short of perhaps embedded systems, etc.).
Assuming the overall objective is to provide the ability to process a stream of data in real time, without having to wait for an EOL/end-of-buffer occurrence to trigger processing ...
A few items:
continue to use the while/read -n loop to read a chunk of data from the incoming stream and store it in a buffer variable
push the conversion code into something better suited to string manipulation (i.e., something other than bash); for the sake of discussion we'll choose awk
within the while/read -n loop, printf "%s\n" "${buffer}" and pipe the output from the while loop into awk; the key item is to introduce an explicit \n into the stream so as to trigger awk processing for each new 'line' of input (the OP can decide whether this additional \n must be distinguished from a \n occurring in the original stream of data)
awk then parses each line of input as per the replacement logic, making sure to append anything left over to the front of the next line of input (i.e., for when the while/read -n breaks an item in the 'middle')
General idea:
chars_to_strings() {
while read -r -n 15 buffer # using '15' for demo purposes otherwise replace with '4096' or whatever OP wants
do
printf "%s\n" "${buffer}"
done | awk '{print NR,FNR,length($0)}' # replace 'print ...' with OP's replacement logic
}
Take for a test drive:
for (( i = 1; i <= 20; i++ ))
do
printf '\a,\b,\t,\v'
sleep 0.1 # add some delay to data being streamed to chars_to_strings()
done | chars_to_strings
1 1 15 # output starts printing right away
2 2 15 # instead of waiting for the 'for'
3 3 15 # loop to complete
4 4 15
5 5 13
6 6 15
7 7 15
8 8 15
9 9 15
A variation on this idea using a named pipe:
mkfifo /tmp/pipeX
sleep infinity > /tmp/pipeX & # keep pipe open so awk does not exit
awk '{print NR,FNR,length($0)}' < /tmp/pipeX &
chars_to_strings() {
while read -r -n 15 buffer
do
printf "%s\n" "${buffer}"
done > /tmp/pipeX
}
Take for a test drive:
for (( i = 1; i <= 20; i++ ))
do
printf '\a,\b,\t,\v'
sleep 0.1
done | chars_to_strings
1 1 15 # output starts printing right away
2 2 15 # instead of waiting for the 'for'
3 3 15 # loop to complete
4 4 15
5 5 13
6 6 15
7 7 15
8 8 15
9 9 15
# kill background 'awk' and/or 'sleep infinity' when no longer needed
Don't waste FS/OFS - use the built-in variables to handle 2 of the 5 replacements: FS is a literal vertical tab (octal \13) and OFS is its replacement string, so forcing the record to be rebuilt via $!(NF = NF) swaps every \v, while ORS substitutes for the newline; the three remaining characters are handled by gsub():
echo $' \t abc xyz \t \a \n\n ' |
mawk 'gsub(/\7/, "<bell>", $!(NF = NF)) + gsub(/\10/,"<bs>") +\
gsub(/\11/,"<h-tab>")^_' OFS='<v-tab>' FS='\13' ORS='<newline>'
<h-tab> abc xyz <h-tab> <bell> <newline><newline> <newline>
To have NO constraint on the line length you could do something like this with GNU awk:
awk -v RS='.{1,100}' -v ORS= '{
$0 = RT
gsub(foo,bar)
print
}'
That will read and process the input 100 chars at a time no matter which chars are present, whether it has newlines or not, and even if the input was one multi-terabyte line.
Replace gsub(foo,bar) with whatever substitution(s) you have in mind, e.g.:
$ printf '\a,\b,\t,\v' |
awk -v RS='.{1,100}' -v ORS= '{
$0 = RT
gsub(/\a/,"<bell>")
gsub(/\b/,"<backspace>")
gsub(/\t/,"<horizontal-tab>")
gsub(/\v/,"<vertical-tab>")
print
}'
<bell>,<backspace>,<horizontal-tab>,<vertical-tab>
And of course it'd be trivial to pass a list of old and new strings to awk rather than hardcoding them; you'd just have to sanitize any regexp or backreference metachars before calling gsub().
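A minimal sketch of that idea for a single old/new pair (the old/new variables and the escaping gsub() are illustrative, not from the original answer; a complete version would also have to escape any "&" in the replacement text):
$ printf '\a,\b,\t,\v' |
awk -v RS='.{1,100}' -v ORS= -v old=$'\a' -v new='<bell>' '{
    # escape regexp metacharacters so the dynamic regexp matches "old" literally
    literal = old
    gsub(/[][\\^$.*?+{}()|]/, "\\\\&", literal)
    $0 = RT
    gsub(literal, new)
    print
}'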

How can a "grep | sed | awk" script merging line pairs be more cleanly implemented?

I have a little script to extract specific data and clean up the output a little. It seems overly messy and I'm wondering whether the script can be trimmed down a bit.
The input file consists of pairs of lines -- names, followed by numbers.
Line pairs where the numeric value is not between 80 and 199 should be discarded.
Pairs may sometimes, but will not always, be preceded or followed by blank lines, which should be ignored.
Example input file:
al12t5682-heapmemusage-latest.log
38
al12t5683-heapmemusage-latest.log
88
al12t5684-heapmemusage-latest.log
100
al12t5685-heapmemusage-latest.log
0
al12t5686-heapmemusage-latest.log
91
Example/wanted output:
al12t5683 88
al12t5684 100
al12t5686 91
Current script:
grep --no-group-separator -PxB1 '([8,9][0-9]|[1][0-9][0-9])' inputfile.txt \
| sed 's/-heapmemusage-latest.log//' \
| awk '{$1=$1;printf("%s ",$0)};NR%2==0{print ""}'
Extra input example
al14672-heapmemusage-latest.log
38
al14671-heapmemusage-latest.log
5
g4t5534-heapmemusage-latest.log
100
al1t0000-heapmemusage-latest.log
0
al1t5535-heapmemusage-latest.log
al1t4676-heapmemusage-latest.log
127
al1t4674-heapmemusage-latest.log
53
A1t5540-heapmemusage-latest.log
54
G4t9981-heapmemusage-latest.log
45
al1c4678-heapmemusage-latest.log
81
B4t8830-heapmemusage-latest.log
76
a1t0091-heapmemusage-latest.log
88
al1t4684-heapmemusage-latest.log
91
Extra Example expected output:
g4t5534 100
al1t4676 127
al1c4678 81
a1t0091 88
al1t4684 91
another awk
$ awk -F- 'NR%2{p=$1; next} 80<=$1 && $1<=199 {print p,$1}' file
al12t5683 88
al12t5684 100
al12t5686 91
UPDATE
To handle blank lines between record pairs, use the empty string as the record separator (paragraph mode):
$ awk -v RS= '80<=$2 && $2<=199{sub(/-.*/,"",$1); print}' file
al12t5683 88
al12t5684 100
al12t5686 91
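With RS set to the empty string, awk reads each block of non-blank lines as a single record and treats newlines as field separators too, so $1 is the name and $2 the number no matter how the blank lines fall.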
Consider implementing this in native bash, as in the following (which can be seen running with your sample input -- including sporadically-present blank lines -- at http://ideone.com/Qtfmrr):
#!/bin/bash
name=; number=
while IFS= read -r line; do
[[ $line ]] || continue # skip blank lines
[[ -z $name ]] && { name=$line; continue; } # first non-blank line becomes name
number=$line # second one becomes number
if (( number >= 80 && number < 200 )); then
name=${name%%-*} # prune everything after first "-"
printf '%s %s\n' "$name" "$number" # emit our output
fi
name=; number= # clear the variables
done <inputfile.txt
The above uses no external commands whatsoever -- so whereas it might be slower to run over large input than a well-implemented awk or perl script, it also has far shorter startup time since no interpreter other than the already-running shell is required.
See:
BashFAQ #1 - How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?, describing the while read idiom.
BashFAQ #100 - How do I do string manipulations in bash?; or The Bash-Hackers' Wiki on parameter expansion, describing how name=${name%%-*} works.
The Bash-Hackers' Wiki on arithmetic expressions, describing the (( ... )) syntax used for numeric comparisons.
perl -nle's/-.*//; $n=<>; print "$_ $n" if 80<=$n && $n<=199' inputfile.txt
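The one-liner reads a name line into $_, strips everything from the first "-", reads the following line into $n, and prints the pair when the number is in range; it assumes the name/number pairs sit on consecutive lines, so a stray blank line would shift the pairing.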
With GNU sed:
sed -E '
# append the next line to the pattern space
N
# number 80-99: branch to the substitution
/\n[8-9][0-9]$/bA
# otherwise delete the pair unless the number is 100-199
/\n1[0-9]{2}$/!d
:A
# keep everything before the first "-" plus the number
s/([^-]*).*\n([0-9]+$)/\1 \2/
' infile

bash to identify and verify file headers

Using the tab-delimited file below, I am trying to validate the header in line 1 and then store its field count in a variable $header to use in a couple of if statements. If $header equals 10, the file has the expected number of fields; if $header is less than 10, the file is missing headers, and the missing header fields are printed underneath. The bash seems close, and if I use the awk by itself it seems to work perfectly, but I cannot seem to use it in the if. Thank you :).
file.txt
Index Chr Start End Ref Alt Freq Qual Score Input
1 1 1 100 C - 1 GOOD 10 .
2 2 20 200 A C .002 STRAND BIAS 2 .
3 2 270 400 - GG .036 GOOD 6 .
file2.txt
Index Chr Start End Ref Alt Freq Qual Score
1 1 1 100 C - 1 GOOD 10
2 2 20 200 A C .002 STRAND BIAS 2
3 2 270 400 - GG .036 GOOD 6
bash
for f in /home/cmccabe/Desktop/validate/*.txt; do
bname=`basename $f`
pref=${bname%%.txt}
header=$(awk -F'\t' '{print NF, "fields detected in file and they are:" ORS $0; exit}') $f >> ${pref}_output # detect header row in file and store in header and write to output
if [[ $header == "10" ]]; then # display results
echo "file has expected number of fields" # file is validated for headers
else
echo "file is missing header for:" # missing header field ...in file not-validated
echo "$header"
fi # close if.... else
done >> ${pref}_output
desired output for file.txt
file has expected number of fields
desired output for file2.txt
file is missing header for:
Input
You can use awk if you like, but bash is more than capable of handling the first-line field comparison on its own. If you maintain an array of expected field names, you can then easily split the first line into fields, compare against the expected number of fields, and output the identity of the missing fields if you read fewer than the expected number from any given file.
The following is a short example that takes filenames as arguments (you need to take filenames from stdin for a large number of files, or use xargs, as required). The script simply reads the first line in each file, separates the line into fields, checks the field count, and outputs any missing fields in a short error message:
#!/bin/bash
declare -i header=10 ## header has 10 fields
## array of field names (can be read from 1st file)
fields=( "Index"
"Chr"
"Start"
"End"
"Ref"
"Alt"
"Freq"
"Qual"
"Score"
"Input" )
for i in "$@"; do ## for each file given as argument
read -r line < "$i" ## read first line from file into 'line'
oldIFS="$IFS" ## save current Internal Field Separator (IFS)
IFS=$'\t' ## set IFS to word-split on '\t'
fldarray=( $line ); ## fill 'fldarray' with fields in line
IFS="$oldIFS" ## restore original IFS
nfields=${#fldarray[@]} ## get number of fields in 'line'
if (( nfields < header )) ## test against header
then
printf "error: only '%d' fields in file '%s'\nmissing:" "$nfields" "$i"
for j in "${fields[@]}" ## for each expected field
do ## check against those in line, if not present print
[[ $line =~ $j ]] || printf " %s" "$j"
done
printf "\n\n" ## tidy up with newlines
fi
done
Example Input
$ cat dat/hdr.txt
Index Chr Start End Ref Alt Freq Qual Score Input
1 1 1 100 C - 1 GOOD 10 .
2 2 20 200 A C .002 STRAND BIAS 2 .
3 2 270 400 - GG .036 GOOD 6 .
$ cat dat/hdr2.txt
Index Chr Start End Ref Alt Freq Qual Score
1 1 1 100 C - 1 GOOD 10
2 2 20 200 A C .002 STRAND BIAS 2
3 2 270 400 - GG .036 GOOD 6
$ cat dat/hdr3.txt
Index Chr Start End Alt Freq Qual Score Input
1 1 1 100 - 1 GOOD 10 .
2 2 20 200 C .002 STRAND BIAS 2 .
3 2 270 400 GG .036 GOOD 6 .
Example Use/Output
$ bash hdrfields.sh dat/hdr.txt dat/hdr2.txt dat/hdr3.txt
error: only '9' fields in file 'dat/hdr2.txt'
missing: Input
error: only '9' fields in file 'dat/hdr3.txt'
missing: Ref
Look things over; while awk can do many things bash cannot, bash on its own is more than capable of parsing text.
Here is one in GNU awk (nextfile):
$ awk '
FNR==NR {
for(n=1;n<=NF;n++)
a[$n]
nextfile
}
NF==(n-1) {
print FILENAME " file has expected number of fields"
nextfile
}
{
for(i=1;i<=NF;i++)
b[$i]
print FILENAME " is missing header for: "
for(i in a)
if (!(i in b))
print i
nextfile
}' file1 file1 file2
file1 file has expected number of fields
file2 is missing header for:
Input
The first file processed by the script defines the headers (collected in a) that the following files should have; each subsequent file's headers (collected in b) are compared against them. That is why file1 is listed twice: the first pass records its headers, the second validates it against itself.
This piece of code will do exactly what you are asking. Let me know if it works for you.
for f in ./*.txt; do
    if [[ $( head -1 "$f" | awk '{ print NF }' ) -eq 10 ]]; then
        echo "File $f has all the fields on its header"
    else
        echo "File $f is missing " $( echo "Index Chr Start End Ref Alt Freq Qual Score Input $( head -1 "$f" )" |
            tr ' \t' '\n\n' | sort | uniq -c | awk '$1 == 1 {print $2}' )
    fi
done
Output :
File ./file2.txt is missing Input
File ./file.txt has all the fields on its header

Bash: capture stdError from looped stream

I have a C++ executable (mycat) that continuously reads a looped audio stream from shared memory and pipes the data to stdout and the metadata to stderr. mycat emits 12 lines of metadata for every entry of the audio stream, looking like this:
0x1 (TimeStamp) 12Bytes:2956 + 6793/(47999+1) (0.141521) delta= 0+ 1536/(47999+1) (0.032000) 2956.151418 -9.898ms 2016.04.04 16:06:37.700
0x4 (ReferenceTime) 12Bytes:2956 + 6156972/(26999999+1) (0.228036) delta= 0+ 1618519/(26999999+1) (0.059944) 2956.151426 76.610ms 2016.04.04 16:06:37.700
0x6 (ProcessDelay) 4Bytes: 64 (0x40)
0x7 (ClockAccuracy) 8Bytes: offset=0.000ppm (+-0.000ppm)
0xb (ClockId) 8Bytes: 01 00 00 00 42 22 01 00
0x20001 (SampleRate) 4Bytes: 48000 (0xbb80)
0x20002 (Channels) 4Bytes: 6 (0x6)
0x20003 (PcmLevel) 24Bytes: -21307 -20348 -31737 -42427 -28786 -26525
0x20004 (PcmPeak) 24Bytes: -14366 -13360 -25203 -39427 -19067 -21307
0x2000e (DolbyDpMetadata) 39352Bytes:
Linear Time: 2956 + 6793/(47999+1) (0.141521) delta= 0+ 1536/(47999+1) (0.032000)
2016.04.04 16:06:37.700 update: slot=0xe2840 validTo=0x3d1dd180 shmT=0x3d195200 (delta=294784) doffset=0xec2c0 msize=39552 dsize=18432 type=0x20001 (PCMS16) data bytes: df f4 f2 fc
What I want is a bash script that:
1) launches mycat, e.g. ./mycat shm_name > /dev/null.
2) reads mycat's stderr up to the 12th line, no matter where it started.
2.1) optionally stores the 12 lines in a variable.
3) immediately kills mycat after the 12th line, so that the bash script can continue without being flooded by further stderr output.
4) reads the value of the "Channels" line (in this case 6) and stores it in a variable named "channels".
5) reads the value of the "SampleRate" line (in this case 48000) and stores it in a variable named "rate".
Is there a way to do it?
You can redirect stderr with mycat shm_name > /dev/null 2>/path/to/file. You can kill mycat with killall mycat when the time comes. For storing the variables, you want channels=... and the same pattern for rate; you can find the values with grep. I'm not sure how to wait for exactly twelve lines, though.
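One way to fill the "exactly twelve lines" gap (a sketch using the question's mycat and shm_name; the awk field positions are an assumption based on the sample metadata above): head exits after the 12th line, and the resulting SIGPIPE terminates mycat the next time it writes.
meta=$(./mycat shm_name 2>&1 >/dev/null | head -n 12)   # stderr into the pipe, audio data discarded
channels=$(awk '/\(Channels\)/ {print $4}' <<< "$meta")
rate=$(awk '/\(SampleRate\)/ {print $4}' <<< "$meta")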
First, you need to redirect STDERR to STDOUT, and have STDOUT go to null.
mycat 2>&1 >/dev/null | parsing_script
Then you need a parsing script to collect your data
#!/bin/bash
declare -a data=()
count=1
while read -r line; do
data+=( "$line
")
((count++))
string=$(echo $line|sed 's/.*(\(.[A-Za-z]*\)).*/\1/')
case $string in
SampleRate) rate=$(echo $line|sed 's/.*: \(.[0-9]*\) .*/\1/')
;;
Channels) channels=$(echo $line|sed 's/.*: \(.[0-9]*\) .*/\1/')
;;
*) true;;
esac
if [[ $count -gt 12 ]]; then
break
fi
done <&0
echo $channels
echo $rate
I echo'd the values, but you can redirect them to a file, format them, etc. Also, ${data[@]} contains your metadata. Since you are piping, I'm reading in from STDIN, but it could be more robust if you want to tweak it.
I found an elegant solution:
while read -r line; # Read mycat line by line
do
if echo "$line" | grep -qi "SampleRate"; then # Search for line containing string 'SampleRate'
rate="${line#*:}"; # Remove all charachters till ':'
rate="${rate%(*}"; # Remove all charachters from '('
rate="$(echo -e "${rate}" | tr -d '[[:space:]]')" # Remove all tralling withespace
elif echo "$line" | grep -qi "Channels"; then
channels="${line#*:}";
channels="${channels%(*}";
channels="$(echo -e "${channels}" | tr -d '[[:space:]]')"
fi
if [ -n "$channels" ] && [ -n "$rate" ]; then # If both variables 'channels' and 'rate' are not empty kill the while loop
break;
fi
done < <(./mycat $shm_name 2>&1 > /dev/null) # mycat feeds the loop via process substitution; stderr is redirected into it ( 2>&1 ) while stdout goes to /dev/null
echo "Found datarate: $rate"
echo "Found channels: $channels"

split file into several sub files

The file I am working on looks like this
header
//
[25]:0.00843832,469:0.0109533):0.00657864,((((872:0.00120503,((980:0.0001);
[29]:((962:0.000580339,930:0.000580339):0.00543993);
absolute:
gthcont: 5 4 2 1 3 4 543 5 67 657 78 67 8 5645 6
01010010101010101010101010101011111100011
1111010010010101010101010111101000100000
00000000000000011001100101010010101011111
I need it to be split into four files. The first file is
[25]:0.00843832,469:0.0109533):0.00657864,((((872:0.00120503,((980:0.0001);
[29]:((962:0.000580339,930:0.000580339):0.00543993);
The second file has to be
5 4 2 1 3 4 543 5 67 657 78 67 8 5645 6
The next file has to be
01010010101010101010101010101011111100011
11110100100101010101010101111010001000001
00000000000000011001100101010010101011111
So the header and the // have to be excluded before the first file, the absolute: line should be removed, and the gthcont: prefix should not show up either.
Ideally the script would just take the name of the input file and name the outputs first_input, second_input, third_input, and so on.
The fourth file should contain the numbers from within the brackets in the first file; in this case it would only be
25
29
So my current try is:
awk.awk
BEGIN{body=0}
!body && /^\/\/$/ {body=1}
body && /^\[/ {print > "first_"FILENAME}
body && /^pos/{$1="";print > "second_"FILENAME}
body && /^[01]+/ {print > "third_"FILENAME}
body && /^\[[0-9]+\]/ {
print > "first_"FILENAME
print substr($0, 2, index($0,"]")-2) > "fourth_"FILENAME
}
but it somehow duplicates the lines in the first file, so it would be [25], [25], [29], [29]
Some very minor changes to your script produce the desired output:
!body && /^\/\/$/ {body=1}
body && sub(/^gthcont: */,"") {print > "second_"FILENAME}
body && /^[01]+/ {print > "third_"FILENAME}
body && /^\[[0-9]+\]/ {
print > "first_"FILENAME
print substr($0, 2, index($0,"]")-2) > "fourth_"FILENAME
}
The duplication problem was caused by the fact that you printed to the first file in two places.
I have used sub to remove the first part of the gthcont: line (and changed the pattern too). sub returns true if it makes any replacements, so you can use it as a test as well. The advantage of using a substitution rather than unsetting the first field is that you can also get rid of the leading white space from the line.
As pointed out in the comments, there is no need to initialise body, so I removed the BEGIN block too.
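As a quick illustration of sub() doubling as the test (a made-up one-liner, not from the thread):
$ printf 'gthcont: 5 4 2\n0101\n' | awk 'sub(/^gthcont: */,"")'
5 4 2
The substitution succeeds only on the gthcont: line, and a pattern with no action prints the modified record, so only that line is emitted.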
I would just use a shell function for this:
function split3 {
if [[ $# -ne 1 ]]; then echo 'split3: error: require 1 argument.' >&2; return 1; fi;
while read -r; do
line=$REPLY;
if [[ "$line" =~ ^\[([0-9]+)\]: ]]; then
echo "$line" >&3;
echo "${BASH_REMATCH[1]}" >&6;
elif [[ "$line" =~ ^gthcont: ]]; then
echo "${line#gthcont: }" >&4;
elif [[ "$line" =~ ^\s*[01]+\s*$ ]]; then
echo "$line" >&5;
fi;
done <"$1" 3>"first_$1" 4>"second_$1" 5>"third_$1" 6>"fourth_$1";
};
split3 input; echo $?;
## 0
cat first_input;
## [25]:0.00843832,469:0.0109533):0.00657864,((((872:0.00120503,((980:0.0001);
## [29]:((962:0.000580339,930:0.000580339):0.00543993);
cat second_input;
## 5 4 2 1 3 4 543 5 67 657 78 67 8 5645 6
cat third_input;
## 01010010101010101010101010101011111100011
## 1111010010010101010101010111101000100000
## 00000000000000011001100101010010101011111
cat fourth_input;
## 25
## 29
