Say I have a string in bash -
NAMES="file1 file2 file3"
How do I map it to the following string which I will then use as part of a command?
MAPPED="-i file1.txt -i file2.txt -i file3.txt"
For an example of exactly what I mean, here's the equivalent Python code -
names = "file1 file2 file3"
mapped = ' '.join("-i " + x + ".txt" for x in names.split())
You should use arrays instead of strings:
names=(file1 file2 file3)
# Add suffix
names=("${names[@]/%/.txt}")
# Build new array with "-i" elements
for name in "${names[@]}"; do
mapped+=(-i "$name")
done
# Show result
declare -p mapped
resulting in this output:
declare -a mapped=([0]="-i" [1]="file1.txt" [2]="-i" [3]="file2.txt" [4]="-i" [5]="file3.txt")
This can now be used in commands like this:
cmd "${mapped[@]}"
See BashFAQ/050 regarding the rationale behind putting commands into strings vs. arrays.
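A compact variant of the same idea appends the suffix inside the loop; a minimal sketch, where cmd is just a placeholder for the real command:
names=(file1 file2 file3)
mapped=()
for name in "${names[@]}"; do
mapped+=(-i "$name.txt")
done
cmd "${mapped[@]}"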
This is probably a very simple question. I looked at other answers but couldn't come up with a solution. I have a 365-line date file, like this:
01-01-2000
02-01-2000
I need to read this file line by line and assign each day to a separate variable, like this:
d001=01-01-2000
d002=02-01-2000
I tried while read commands but couldn't get them to work. Assigning them one by one takes a lot of time. How can I do it quickly?
Creating individually named variables like that is a waste of time and not really supported. Better to use an associative array:
#!/bin/bash
declare -A array
while read -r line; do
printf -v key 'd%03d' $((++c))
array[$key]=$line
done < file
Output
for i in "${!array[@]}"; do echo "key=$i value=${array[$i]}"; done
key=d001 value=01-01-2000
key=d002 value=02-01-2000
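The printf -v used above writes the formatted result into a shell variable instead of printing it, which is what builds the d001, d002, ... keys; a quick illustration:
$ printf -v key 'd%03d' 5
$ echo "$key"
d005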
Assumptions:
- an array is acceptable
- array index should start with 1
Sample input:
$ cat sample.dat
01-01-2000
02-01-2000
03-01-2000
04-01-2000
05-01-2000
One bash/mapfile option:
unset d # make sure variable is not currently in use
mapfile -t -O1 d < sample.dat # load each line from file into separate array location
This generates:
$ typeset -p d
declare -a d=([1]="01-01-2000" [2]="02-01-2000" [3]="03-01-2000" [4]="04-01-2000" [5]="05-01-2000")
$ for i in "${!d[@]}"; do echo "d[$i] = ${d[i]}"; done
d[1] = 01-01-2000
d[2] = 02-01-2000
d[3] = 03-01-2000
d[4] = 04-01-2000
d[5] = 05-01-2000
In OP's code, references to $d001 now become ${d[1]}.
A quick one-liner would be:
eval $(awk 'BEGIN{cnt=0}{printf "d%3.3d=\"%s\"\n",cnt,$0; cnt++}' your_file)
eval makes the shell variables known inside your script or shell. Use echo $d000 to show the first of the newly defined variables. There should be no shell special characters (like * and $) inside your_file. Remove the eval $() wrapper to see the raw output of the awk command. The \" quoting around %s allows spaces in the variable values; if you don't have any spaces in your_file, you can remove the \" before and after %s.
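For instance, with the two sample lines shown earlier, the awk command alone produces the assignments that eval then executes:
$ awk 'BEGIN{cnt=0}{printf "d%3.3d=\"%s\"\n",cnt,$0; cnt++}' your_file
d000="01-01-2000"
d001="02-01-2000"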
I want to create a dictionary in bash from a text file which looks like this:
H96400275|A
H96400276|B
H96400265|C
H96400286|D
Basically I want a dictionary like this from this file file.txt:
KEYS VALUES
H96400275 = A
H96400276 = B
H96400265 = C
H96400286 = D
I created following script:
#!/bin/bash
declare -a dictionary
while read line; do
key=$(echo $line | cut -d "|" -f1)
data=$(echo $line | cut -d "|" -f2)
dictionary[$key]="$data"
done < file.txt
echo ${dictionary[H96400275]}
However, this does not print A, rather it prints D. Can you please help ?
Associative arrays (dictionaries in your terms) are declared using -A, not -a. For references to indexed (ones declared with -a) arrays' elements, bash performs arithmetic expansion on the subscript ($key and H96400275 in this case); so you're basically overwriting dictionary[0] over and over, and then asking for its value; thus D is printed.
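A quick way to see this in an interactive shell (arr is just an illustrative name): the unset identifier evaluates to 0 under arithmetic expansion, so every assignment hits index 0:
$ key=H96400275
$ echo $(( key ))
0
$ declare -a arr; arr[$key]=A; arr[$key]=D; declare -p arr
declare -a arr=([0]="D")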
And to make this script more effective, you can use read in conjunction with a custom IFS to avoid cuts. E.g:
declare -A dict
while IFS='|' read -r key value; do
dict[$key]=$value
done < file
echo "${dict[H96400275]}"
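The IFS='|' prefix applies only to that single read invocation, splitting each line at the pipe character; for example:
$ IFS='|' read -r key value <<< 'H96400275|A'
$ echo "$key / $value"
H96400275 / A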
See Bash Reference Manual § 6.7 Arrays.
The only problem is that you have to use -A instead of -a:
-a Each name is an indexed array variable (see Arrays above).
-A Each name is an associative array variable (see Arrays above).
What you want is called an associative array, and to declare it you need this command:
declare -A dictionary
I need to find pairs of files with a specific pattern in one directory:
HU_IP_number_something.bam & HU_inp_number_something.bam
NOC_IP_number_something.bam & NOC_inp_number_something.bam
Numbers are 1...N for each pair
I have a solution but it works only for one set of files HU_* or NOC_* in one directory.
How can I improve it to find pairs, when both HU_* and NOC_* are in one directory?
for ip in *IP*.bam
do
num=$(echo $ip | sed 's/[^0-9]//g')
input=$(find . -name "*_inp_${num}*.bam")
echo ip sample: $ip
echo input sample: $input
done
Examples of files in one directory:
HU_inp_1-sorted.bam
HU_IP_1-sorted.bam
NOC_inp_1-sorted.bam
NOC_IP_1-sorted.bam
for 1,2,3,...N
The following builds an array, $a for each iteration of a for loop.
$ for f in *IP*.bam; do s=${f#*_}; a=( *${s} ); declare -p a; done
declare -a a=([0]="HU_IP_number_something.bam" [1]="NOC_IP_number_something.bam")
declare -a a=([0]="HU_IP_number_something.bam" [1]="NOC_IP_number_something.bam")
This steps through all the files matching your filespec, strips off the first "field" (as delimited by the underscore separator), and uses globbing to collect the relevant files into the array.
You can test the length of the array (${#a[@]}) to make sure you have two entries.
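For example, a minimal guard, assuming exactly one pair per group is expected:
for f in *IP*.bam; do
s=${f#*_}
a=( *"${s}" )
if (( ${#a[@]} != 2 )); then
echo "warning: expected a pair for $f, found ${#a[@]} file(s)" >&2
fi
done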
If you want to group by the second field instead of the first, you need a little more processing:
$ for f in *IP*.bam; do s1=${f%%_*}; s2=${f#*_}; s2=${s2#*_}; a=( ${s1}*${s2} ); declare -p a; done
declare -a a=([0]="HU_IP_number_something.bam" [1]="HU_inp_number_something.bam")
declare -a a=([0]="NOC_IP_number_something.bam" [1]="NOC_inp_number_something.bam")
The technique here, using ${var#pattern} and ${var%pattern}, is called Parameter Expansion, and you can find more details about it in the bash man page.
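A couple of quick illustrations of those expansions:
$ f=HU_IP_1-sorted.bam
$ echo "${f#*_}"     # remove shortest leading match of *_
IP_1-sorted.bam
$ echo "${f%%_*}"    # remove longest trailing match of _*
HU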
Do you only want to match HU to HU and NOC to NOC? If so:
If you add a line
pre=$(echo "$ip" | awk -F "_" '{print $1}')
then change your input line to
input=$(find . -name "${pre}_inp_${num}*.bam")
(the braces around pre matter: $pre_inp would be parsed as a single variable name).
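Putting both changes together, a sketch of the adjusted loop (assuming the -sorted filename layout above):
for ip in *IP*.bam; do
num=$(echo "$ip" | sed 's/[^0-9]//g')
pre=$(echo "$ip" | awk -F "_" '{print $1}')
input=$(find . -name "${pre}_inp_${num}*.bam")
echo "ip sample: $ip"
echo "input sample: $input"
done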
I have a file links.txt:
1 a.sh
3 b.sh
6 c.sh
4 d.sh
So, if I pass 1,4 as parameters to another file (master.sh), a.sh and d.sh should be stored in a variable.
sed '3!d' would print the 3rd line, but not the line that starts with 3. For that, you need sed '/^3 /!d'. The problem is you can't combine them for more lines, as this means "Delete everything that doesn't start with a 3", which means all other lines will be missed. So, use sed -n '/^3 /p' instead, i.e. don't print by default and tell sed what lines to print, not what lines to delete.
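For instance, to pull just the lines starting with 1 and 4 from links.txt:
$ sed -n '/^1 /p; /^4 /p' links.txt
1 a.sh
4 d.sh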
You can loop over the argument and create a sed script from them that prints the lines, then run sed using this output:
#!/bin/bash
file=$1
shift
for id in "$@"; do
echo "/^$id /p"
done | sed -nf- "$file"
Run as script.sh filename 3 4.
If you want to remove the id from the output, you can either use
cut -f2 -d' '
or you can modify the generated sed script to do the work
echo "/^$id /s/.* //p"
i.e. only print if the substitution was successful.
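For example, invoked as script.sh links.txt 3 4 with the substitution variant, the loop generates this sed script, and sed prints b.sh and d.sh:
/^3 /s/.* //p
/^4 /s/.* //p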
This loops through each argument and greps for it in the links file. The result is piped into cut, where we specify the delimiter as a space with the -d flag and the field number as 2 with the -f flag. Finally, this is appended to the array called files.
links="links.txt"
files=()
for arg in "$@"; do
files=("${files[@]}" $(grep "^$arg " "$links" | cut -d" " -f2))
done
echo ${files[@]}
Usage:
$ ./master.sh 1 4
a.sh d.sh
Edit:
As pointed out by mklement0, the solution above reads the file once per arg. The following first builds the pattern then reads the file just once.
links="links.txt"
pattern="^$1\s"
for arg in "${@:2}"; do
pattern+="|^$arg\s"
done
files=$(grep -E "$pattern" "$links" | cut -d" " -f2)
echo ${files[@]}
Usage:
$ ./master.sh 1 4
a.sh d.sh
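The ${@:2} slice used in the loop expands to the positional parameters from the second one onward, skipping the $1 already baked into the pattern; a quick check:
$ set -- 1 4
$ echo "${@:2}"
4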
Here is another example with grep and cut:
#!/bin/bash
for line in $(grep "$1\|$2" links.txt | cut -d' ' -f2)
do
echo $line
done
Example of usage:
./master.sh 1 4
a.sh
d.sh
Why not just store the values and recall them at will:
items=()
while read -r num file
do
items[num]="$file"
done<links.txt
for arg
do
echo "${items[arg]}"
done
Now you can use the items array any time you like :)
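Usage, assuming the snippet is saved as master.sh alongside the sample links.txt:
$ ./master.sh 1 4
a.sh
d.sh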
The following awk solution:
- preserves the argument order; that is, the results reflect the order in which the lookup values were specified (as opposed to the order in which the lookup values happen to occur in the file). If that is not important (i.e., if outputting the results in file order is acceptable), the readarray technique below can be combined with this one-liner, which is a generalized variant of Panta's answer:
  grep -f <(printf "^%s\n" "$@") links.txt | cut -d' ' -f2-
- performs well, because the input file is only read once; the only requirement is that all key-value pairs fit into memory as a whole (as a single associative Awk array (dictionary)).
- works with any lookup values that don't have embedded whitespace. Similarly, the assumption is that the output column values (containing values such as a.sh in the sample input) have no embedded whitespace; awk doesn't handle quoted fields well, so more work would be needed.
#!/bin/bash
readarray -t files < <(
awk -v idList="$*" '
BEGIN { count=split(idList, idArr); for (i in idArr) idDict[idArr[i]]++ }
$1 in idDict { idDict[$1] = $2 }
END { for (i=1; i<=count; ++i) print idDict[idArr[i]] }
' links.txt
)
# Print results.
printf '%s\n' "${files[@]}"
readarray -t files reads stdin input (<) line by line into array variable files.
Note: readarray requires Bash v4+; on Bash 3.x, such as on macOS, replace this part with
IFS=$'\n' read -d '' -ra files
<(...) is a Bash process substitution that, loosely speaking, presents the output from the enclosed command as if it were a (self-deleting) temporary file.
This technique allows readarray to run in the current shell (as opposed to a subshell if a pipeline had been used), which is necessary for the files variable to remain defined in the remainder of the script.
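A self-contained illustration of the pattern, with printf standing in for the awk command:
$ readarray -t files < <(printf '%s\n' a.sh d.sh)
$ declare -p files
declare -a files=([0]="a.sh" [1]="d.sh")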
The awk command breaks down as follows:
-v idList="$*" passes the space-separated list of all command-line arguments as a single string to Awk variable idList.
Note that this assumes that the arguments have no embedded spaces, which is indeed the case here and also generally the case with identifiers.
BEGIN { ... } is only executed once, before the individual lines are processed:
split(idList, idArr) splits the input ID list into an array by whitespace and stores the result in idArr.
for (i in idArr) idDict[idArr[i]]++ then converts the (conceptually regular) array into associative array idDict (dictionary), whose keys are the input IDs - this enables efficient lookup by ID later, and also allows storing the lookup result for each ID.
$1 in idDict { idDict[$1] = $2 } is processed for every input line:
Pattern $1 in idDict returns true if the line's first whitespace-separated field ($1) - e.g., 6 - is among the keys (in) of associative array idDict, and, if so, executes the associated action ({...}).
Action { idDict[$1] = $2 } then assigns the second field ($2) - e.g., c.sh - to the idDict entry for key $1.
END { ... } is executed once, after all input lines have been processed:
for (i=1; i<=count; ++i) print idDict[idArr[i]] loops over all input IDs in order and prints each ID's lookup result, which is the value of the dictionary entry with that ID.
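Assuming the script above is saved as lookup.sh (an illustrative name) next to links.txt:
$ ./lookup.sh 1 4
a.sh
d.sh
$ ./lookup.sh 4 1
d.sh
a.sh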
I am creating a bash script to modify and summarize information with grep and sed. But it gets stuck.
#!/bin/bash
# This script extracts some basic information
# from text files and prints it to screen.
#
# Usage: ./myscript.sh </path/to/text-file>
#Extract lines starting with ">#HWI"
ONLY=`grep -v ^\>#HWI`
#replaces A and G with R in lines
ONLYR=`sed -e s/A/R/g -e s/G/R/g $ONLY`
grep R $ONLYR | wc -l
The correct way to write a shell script to do what you seem to be trying to do is:
awk '
!/^>#HWI/ {
    gsub(/[AG]/,"R")
    if (/R/) {
        ++cnt
    }
}
END { print cnt+0 }
' "$@"
Just put that in the file myscript.sh and execute it as you do today.
To be clear - the bulk of the above code is an awk script, the shell script part is the first and last lines where the shell just calls awk and passes it the input file names.
If you WANT to have intermediate variables then you can create/print them with:
awk '
!/^>#HWI/ {
    only = $0
    onlyR = only
    gsub(/[AG]/,"R",onlyR)
    print "only:", only
    print "onlyR:", onlyR
    if (onlyR ~ /R/) {
        ++cnt
    }
}
END { print cnt+0 }
' "$@"
The above will work robustly, portably, and efficiently on all UNIX systems.
First of all, as @fedorqui commented, you're not providing grep with a source of input against which it will perform line matching.
Second, there are some problems in your script, which will result in unwanted behavior in the future, when you decide to manipulate some data:
Store matching lines in an array, or a file from which you'll later read values. The variable ONLY is not the right data structure for the task.
By convention, environment variables (PATH, EDITOR, SHELL, ...) and internal shell variables (BASH_VERSION, RANDOM, ...) are fully capitalized. All other variable names should be lowercase. Since
variable names are case-sensitive, this convention avoids accidentally overriding environmental and internal variables.
Here's a better version of your script, considering these points, but with an open question regarding what you were trying to do in the last line, grep R $ONLYR | wc -l:
#!/bin/bash
# This script extracts some basic information
# from text files and prints it to screen.
#
# Usage: ./myscript.sh </path/to/text-file>
input_file=$1
# Read lines not matching the provided regex, from $input_file
mapfile -t only < <(grep -v '^>#HWI' "$input_file")
#replaces A and G with R in lines
for((i=0;i<${#only[@]};i++)); do
only[i]="${only[i]//[AG]/R}"
done
# DEBUG
printf '%s\n' "Here are the lines, after replace:"
printf '%s\n' "${only[@]}"
# I'm not sure what you were trying to do here. Am I guessing right that you wanted
# to count the number of R's in ALL lines?
# grep R $ONLYR | wc -l
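If the goal was indeed to count the lines containing at least one R after the replacement, a minimal sketch continuing from the only array:
count=0
for line in "${only[@]}"; do
[[ $line == *R* ]] && (( ++count ))
done
echo "Lines containing R: $count"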