Extract part of file name with multiple sections - bash

I am trying to extract part of a file name to compare with other file names, as it is the only part that does not change. Here is the pattern and an example:
clearinghouse.doctype.payer.transID.processID.date.time
EMDEON.270.60054.1234567890123456789.70949996.20120925.014606403
All sections are always the same length, with the exception of clearinghouse and doctype, which can vary in character length.
The part of the filename that I need for comparison is the transID.
What would be the cleanest, shortest way to do this in a shell script?
Thanks

There are lots of ways to do this; the easiest tool for simple tasks is the cut command. Tell cut which character to use as a delimiter and which fields to print. Here is a command that does what you want:
file=EMDEON.270.60054.1234567890123456789.70949996.20120925.014606403
transID=$(echo "$file" | cut -d. -f4)
Awk can do the same thing, and allows you to do much more complicated logic as well:
file=EMDEON.270.60054.1234567890123456789.70949996.20120925.014606403
transID=$(echo "$file" | awk -F. '{print $4}')

You can split the filename apart with the read command by setting IFS to an appropriate value.
filename="EMDEON.270.60054.1234567890123456789.70949996.20120925.014606403"
IFS="." read clHouse doctype payer transID procID dt tm <<< "$filename"
echo $transID
Since you only want the transaction ID, it's overkill to assign every part to a specific variable. Use a single dummy variable for the other fields:
# You only need one variable after transID to swallow the rest of the input without
# splitting it up.
IFS="." read _ _ _ transID _ <<< "$filename"
or just read the parts into an array and access the proper element:
IFS="." read -a parts <<< "$filename"
transID="${parts[3]}"

You can do this with parameter expansion: strip the trailing processID.date.time fields with %, then strip everything up to the last remaining dot with ##:
$ foo=EMDEON.270.60054.1234567890123456789.70949996.20120925.014606403
$ bar=${foo%.[0-9]*.[0-9]*.[0-9]*}
$ echo "${bar##*.}"
1234567890123456789

tranid=$(echo "$file_name" | perl -F'\.' -ane 'print $F[3]')

Related

How to use a csv file as input for basic arithmetic operations in bash

I've stored my data in a file neckrev_dim.csv, structured like the following:
subjectID,dim3,pixdim3
MR44825,405,0.625
I also have a separate subjects.csv, just containing all the subjectIDs:
MR44825
MR55843
Now I want to use this data in basic arithmetic operations using bash.
subjlist=subjects.csv
for subj in ` cat $subjlist `
do
dim3=$(grep -w '$subj' neckrev_dim.csv | cut -d ',' -f 2)
pixdim3=$(grep -w '$subj' neckrev_dim.csv | cut -d ',' -f 3)
total_length=$(($dim3*$pixdim3))
echo $total_length
done
This leads to the following error:
syntax error: operand expected (error token is "*")
I think the problem lies within the grep, but I can't figure it out.
Thanks in advance!
The main issue is that POSIX arithmetic does not support decimals, only integers.
You will have to use something else, like bc for non-integer arithmetic.
The other issue is that you are single-quoting $subj -- you should use double quotes so the variable gets expanded.
Try the following:
subjlist=subjects.csv
while read -r subj
do
dim3=$(grep -w "$subj" neckrev_dim.csv | cut -d ',' -f 2)
pixdim3=$(grep -w "$subj" neckrev_dim.csv | cut -d ',' -f 3)
echo "$dim3 * $pixdim3" | bc
done < "$subjlist"
Note, here bc is reading from standard input, so we just need to echo the arithmetic expression to bc.
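For the sample row above (dim3=405, pixdim3=0.625), for example:
echo "405 * 0.625" | bc
253.125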
You need to change the single quotes to double quotes around the $subj. Single quotes won't expand the variable.
The solution below is designed to work accurately and more generally for different types of key values and different CSV lines, avoiding some of the limitations and failure modes of the other solutions.
Description of the code
Using single key fields read one per line from file keys.txt, search for the key in the first field in a CSV file generic.csv and do some floating-point (non-integer) math on the numbers in the other fields.
Performance enhancements:
If $key selects a unique row in the file, change XexitX below to exit so that awk doesn't keep reading the rest of the file unnecessarily; otherwise, delete XexitX and it will do all the lines matching that key.
If generic.csv is a large file, then sort it and replace the awk line with the look --binary line. This will replace a linear search with a binary search. Make sure you sort the whole file:
sort -o generic.csv generic.csv
Limitations:
The $key key must not contain backslashes or double quotes in the awk version. This could be fixed using sed -e 's/\\/&&/g' -e 's/"/\\"/g' on the field (see the sketch after the code below). The look --binary version doesn't care.
The generic.csv file must use commas only, no "quoted" CSV fields. This means no fields may contain commas.
The look --binary version does key prefix matching on the CSV lines, so you can't have a key that is a prefix of another, e.g. keys ABC and AB aren't distinct. The awk version doesn't have this problem.
Advantages of this over other solutions:
Reads the CSV only once per key, not multiple times.
The $key is matched exactly on the first field and not on any fields that might be added to the rest of the CSV line - no false matches. (The look --binary version does do prefix matching, so you can't have a key that is a prefix of another.)
The key field is a text field, not a regular expression, so it may contain special characters without any need to worry about escaping regular expression metacharacters to avoid errors.
No need to use grep or cut to separate fields; only one pipe, not three.
Can easily scale up to huge CSV files by using look --binary instead of awk.
while read -r key ; do
# SEE NOTES: look --binary "$key" generic.csv \
awk -F, "\$1 == \"$key\" { print ; XexitX }" generic.csv \
| while IFS=, read -r key num1 num2 ; do
echo "$key: $(dc -e "$num1 $num2 * p")"
done
done <keys.txt
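For reference, a minimal sketch of the escaping mentioned under Limitations, using a hypothetical safe_key variable to sanitise $key before it is interpolated into the awk program:
# double every backslash and escape double quotes so $key is safe inside the awk string
safe_key=$(printf '%s\n' "$key" | sed -e 's/\\/&&/g' -e 's/"/\\"/g')
awk -F, "\$1 == \"$safe_key\" { print }" generic.csv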

Utilising variables in tail command

I am trying to extract characters from a reference file in which their byte position is known. To do this, I have a long list of numbers stored as a variable, which has been used as the input to a tail command.
For example, the reference file looks like:
ggaaatgcattcaaacatgc
And the list looks like:
5
10
7
15
I have tried using this code:
list=$(<pos.txt)
echo "$list"
cat ref.txt | tail -c +"list" | head -c1 > out.txt
However, it keeps returning "invalid number of bytes: '+5\n10\n7\n15...'"
My expected output would be
a
t
g
a
...
Can anybody tell me what I'm doing wrong? Thanks!
It looks like you are trying to access your list variable in your tail command. You can access it like this: $list rather than just using quotes around it.
Your logic is flawed even after fixing the variable access. The list variable includes all lines of your pos.txt file, including the newline characters (\n), which are invisible in many UIs and programs but of course matter when you are reading single bytes. You need to feed the lines one by one to make it work properly.
Also unless those numbers are indexes from the end, you need to feed them to head instead of tail.
If I understood what you are attempting to do correctly, this should work:
while read line
do
head -c $line ref.txt | tail -c 1 >> out.txt
done < pos.txt
The reason for your command failure is simple. The variable list contains a multi-line string read from the pos.txt file, including newlines. You cannot pass more than one integer value to the -c flag.
Your attempt can be fixed quite easily by removing the call to cat and reading each position into a variable, one line at a time:
while IFS= read -r lineNo; do
tail -c +"$lineNo" ref.txt | head -c1
done < pos.txt
But if your intention is to print each character on its own line, head does not output that way: for your given input it just produces the string atga on a single line, not one character per line.
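If you do want one character per line from this loop, a minimal sketch is to capture the byte and print it with printf, which appends the missing newline:
while IFS= read -r lineNo; do
    # grab the byte at offset $lineNo, then print it followed by a newline
    printf '%s\n' "$(tail -c +"$lineNo" ref.txt | head -c1)"
done < pos.txt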
As Gordon mentions in one of the comments, for much more efficient FASTA file processing you could use a single invocation of awk (avoiding repeated forks to head/tail). Since your provided input has no header lines to skip, this is straightforward:
awk ' FNR==NR{ n = split($0,arr,""); for(i=1;i<=n;i++) hash[i] = arr[i] }
( $0 in hash ){ print hash[$0] } ' ref.txt pos.txt
You could use cut instead of tail:
pos=$(<pos.txt)
cut -c ${pos//$'\n'/,} --output-delimiter=$'\n' ref.txt
Or just awk:
awk -F '' 'NR==FNR{c[$0];next} {for(i in c) print $i}' pos.txt ref.txt
both yield:
a
g
t
a

Scripting username creation from text file?

I'm really new at Bash and scripting in general.
I have to create usernames formed of the first letter of the first name followed by the last name. To do it, I use a provided text file that looks like this:
doe,john
smith,mike
...
I declared the following variables:
fname=$(cut -d, -f2 "file.txt" | cut -c1)
lname=$(cut -d, -f1 "file.txt")
But how do I put the elements together to form the names jdoe and msmith? I tried the methods I know to concatenate strings and variables, but nothing works.
I think I found a method using awk that is supposed to work, but is there any other way to "concatenate" the elements of 2 lists?
Thank you
There's a million ways to do it, this is simplest:
$ awk -F, '{print substr($2,1,1) $1}' file
jdoe
msmith
Ed Morton's awk-based answer is simplest (and probably fastest), but since you asked for a different solution:
#!/usr/bin/env bash
while IFS=, read -r last first _; do
username=${first:0:1}${last}
echo "username: $username"
done < file.txt
IFS=, read -r last first _ reads the first 2 ,-separated fields from each input line (_ is a dummy variable that receives the rest of the input line, if any; -r prevents interpretation of \ chars. in the input, which is usually what you want).
username=${first:0:1}${last} concatenates the 1st char. of variable $first's value with variable $last's value, simply by placing the two variable references next to each other.
${first:0:1} - extract 1 character from $first at position 0 - is an example of parameter expansion, specifically: substring expansion
< file.txt is an input redirection that sends file.txt's contents via stdin to the while loop.
This looks a bit too much like homework, so I'll just drop some hints.
To read the lastname and firstname into separate variables for each line of the file, see BashFAQ 1. It should not involve cut.
To grab the first character of a variable, see BashFAQ 100.

Grep outputs multiple lines, need while loop

I have a script which uses grep to find lines in a text file (ics calendar to be specific)
My script finds a date match, then goes up and down a few lines to copy the summary and start time of the appointment into a separate variable. The problem I have is that I'm going to have multiple appointments at the same time, and I need to run through the whole process for each result in grep.
Example:
LINE=`grep -F -n 20130304T232200 /path/to/calendar.ics | cut -f1 -d:`
And it outputs only the line numbers, such as
86 89
Then it goes on to capture my other variables, as such:
SUMMARYLINE=$(( $LINE + 5 ))
SUMMARY=`sed -n "$SUMMARYLINE"p /path/to/calendar.ics`
My script runs fine with one result, but it obviously won't work with more than one, and I need it to. Should I send the grep results into an array? A separate text file to read from? I'm sure I'll need a while loop in here somehow. Need some help please.
You can call grep from a loop quite easily:
while IFS=':' read -r LINE notused # avoids the use of cut
do
# First field is now in $LINE
# Further processing
done < <(grep -F -n 20130304T232200 /path/to/calendar.ics)
However, if the file is not too large then it might be easier to read the whole file into an array and move around in that.
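For example, a rough sketch of that array approach using mapfile (bash 4+), keeping the original script's offset of 5 lines below the matched date for the SUMMARY line:
mapfile -t lines < /path/to/calendar.ics
for i in "${!lines[@]}"; do
    if [[ ${lines[i]} == *20130304T232200* ]]; then
        summary=${lines[i + 5]}    # same line as the original LINE + 5
        echo "$summary"
    fi
done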
With your proposed solution, you are reading through the file several times. Using awk, you can do it in one pass:
awk -F: -v time=20130304T232200 '
$1 == "SUMMARY" {summary = substr($0,9)}
/^DTSTART/ {start = $2}
/^END:VEVENT/ && start == time {print summary}
' calendar.ics

How to parse a CSV in a Bash script?

I am trying to parse a CSV containing potentially 100k+ lines. Here are the criteria I have:
The index of the identifier
The identifier value
I would like to retrieve all lines in the CSV that have the given value in the given index (delimited by commas).
Any ideas, with special consideration for performance?
As an alternative to cut- or awk-based one-liners, you could use the specialized csvtool aka ocaml-csv:
$ csvtool -t ',' col "$index" - < csvfile | grep "$value"
According to the docs, it handles escaping, quoting, etc.
See this youtube video: BASH scripting lesson 10 working with CSV files
CSV file:
Bob Brown;Manager;16581;Main
Sally Seaforth;Director;4678;HOME
Bash script:
#!/bin/bash
OLDIFS=$IFS
IFS=";"
while read user job uid location
do
echo -e "$user \
======================\n\
Role :\t $job\n\
ID :\t $uid\n\
SITE :\t $location\n"
done < $1
IFS=$OLDIFS
Output:
Bob Brown ======================
Role : Manager
ID : 16581
SITE : Main
Sally Seaforth ======================
Role : Director
ID : 4678
SITE : HOME
First prototype using plain old grep and cut:
grep "${VALUE}" inputfile.csv | cut -d, -f"${INDEX}"
If that's fast enough and gives the proper output, you're done.
CSV isn't quite that simple. Depending on the limits of the data you have, you might have to worry about quoted values (which may contain commas and newlines) and escaping quotes.
So if your data are restricted enough that you can get away with simple comma-splitting, a shell script can do that easily. If, on the other hand, you need to parse CSV ‘properly’, bash would not be my first choice. Instead I'd look at a higher-level scripting language, for example Python with a csv.reader.
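For example, a rough sketch of that csv.reader approach, wrapped in a shell one-liner and reusing the ${INDEX} (1-based, as with cut -f) and ${VALUE} variables from the prototype above:
python3 -c '
import csv, sys
idx = int(sys.argv[1]) - 1          # convert the 1-based index to 0-based
value = sys.argv[2]
writer = csv.writer(sys.stdout, lineterminator="\n")
for row in csv.reader(sys.stdin):   # csv.reader handles quoted fields and embedded commas
    if len(row) > idx and row[idx] == value:
        writer.writerow(row)
' "$INDEX" "$VALUE" < inputfile.csv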
In a CSV file, each field is separated by a comma. The problem is, a field itself might have an embedded comma:
Name,Phone
"Woo, John",425-555-1212
You really need a library package that offers robust CSV support instead of relying on using comma as a field separator. I know that scripting languages such as Python have such support. However, I am comfortable with the Tcl scripting language so that is what I use. Here is a simple Tcl script which does what you are asking for:
#!/usr/bin/env tclsh
package require csv
package require Tclx
# Parse the command line parameters
lassign $argv fileName columnNumber expectedValue
# Subtract 1 from columnNumber because Tcl's list index starts with a
# zero instead of a one
incr columnNumber -1
for_file line $fileName {
set columns [csv::split $line]
set columnValue [lindex $columns $columnNumber]
if {$columnValue == $expectedValue} {
puts $line
}
}
Save this script to a file called csv.tcl and invoke it as:
$ tclsh csv.tcl filename indexNumber expectedValue
Explanation
The script reads the CSV file line by line and stores the line in the variable $line, then splits each line into a list of columns (variable $columns). Next, it picks out the specified column and assigns it to the $columnValue variable. If there is a match, it prints out the original line.
Using awk:
export INDEX=2
export VALUE=bar
awk -F, '$'$INDEX' ~ /^'$VALUE'$/ {print}' inputfile.csv
Edit: As per Dennis Williamson's excellent comment, this could be much more cleanly (and safely) written by defining awk variables using the -v switch (note that index is a built-in awk function, so a different name such as idx is used here):
awk -F, -v idx="$INDEX" -v val="$VALUE" '$idx == val {print}' inputfile.csv
Jeez...with variables, and everything, awk is almost a real programming language...
For situations where the data does not contain any special characters, the solution suggested by Nate Kohl and ghostdog74 is good.
If the data contains commas or newlines inside the fields, awk may not properly count the field numbers and you'll get incorrect results.
You can still use awk, with some help from a program I wrote called csvquote (available at https://github.com/dbro/csvquote):
csvquote inputfile.csv | awk -F, -v idx="$INDEX" -v val="$VALUE" '$idx == val {print}' | csvquote -u
This program finds special characters inside quoted fields, and temporarily replaces them with nonprinting characters which won't confuse awk. Then they get restored after awk is done.
index=1
value=2
awk -F"," -v i=$index -v v=$value '$(i)==v' file
I was looking for an elegant solution that supports quoting and wouldn't require installing anything fancy on my VMware vMA appliance. Turns out this simple Python script does the trick! (I named the script csv2tsv.py, since it converts CSV into tab-separated values - TSV.)
#!/usr/bin/env python
import sys, csv
with sys.stdin as f:
    reader = csv.reader(f)
    for row in reader:
        for col in row:
            print col+'\t',
        print
Tab-separated values can be split easily with the cut command (no delimiter needs to be specified, tab is the default). Here's a sample usage/output:
> esxcli -h $VI_HOST --formatter=csv network vswitch standard list |csv2tsv.py|cut -f12
Uplinks
vmnic4,vmnic0,
vmnic5,vmnic1,
vmnic6,vmnic2,
In my scripts I'm actually going to parse tsv output line by line and use read or cut to get the fields I need.
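For example, a rough sketch of that line-by-line parsing, with purely illustrative field names (the real esxcli columns will differ):
esxcli -h "$VI_HOST" --formatter=csv network vswitch standard list \
  | csv2tsv.py \
  | while IFS=$'\t' read -r name type uplinks rest; do
        echo "vswitch=$name uplinks=$uplinks"
    done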
Parsing CSV with primitive text-processing tools will fail on many types of CSV input.
xsv is a lovely and fast tool for doing this properly. To search for all records that contain the string "foo" in the third column:
cat file.csv | xsv search -s 3 foo
A sed or awk solution would probably be shorter, but here's one for Perl:
perl -F/,/ -ane 'print if $F[<INDEX>] eq "<VALUE>"'
where <INDEX> is 0-based (0 for first column, 1 for 2nd column, etc.)
Awk (gawk) actually provides extensions, one of which being csv processing.
Assuming that extension is installed, you can use awk to show all lines where a specific csv field matches 123.
Assuming test.csv contains the following:
Name,Phone
"Woo, John",425-555-1212
"James T. Kirk",123
The following will print all lines where the Phone (aka the second field) is equal to 123:
gawk -l csv 'csvsplit($0,a) && a[2] == 123 {print $0}' test.csv
The output is:
"James T. Kirk",123
How does it work?
-l csv asks gawk to load the csv extension by looking for it in $AWKLIBPATH;
csvsplit($0, a) splits the current line, and stores each field into a new array named a
&& a[2] == 123 checks that the second field is 123
if both conditions are true, it { print $0 }, aka prints the full line as requested.
