I have input like this:
field = <0xaaa 0xbbb>;
and would like to extract the two hex values so they can be used in a Makefile.
How would I go about that?
You can remove everything surrounding the two values with:
sed 's/.*<\(.*\)>.*/\1/'
Test:
% echo 'field = <0xaaa 0xbbb>;' | sed 's/.*<\(.*\)>.*/\1/'
0xaaa 0xbbb
If you need to assign the values to shell variables:
declare -a values=($(sed 's/.*<\(.*\)>.*/\1/' input_file))
echo "${values[0]}" # 0xaaa
echo "${values[1]}" # 0xbbb
# ... If there are more lines in input_file, the array will be bigger ...
# ${values[2]} will contain the next line's first value
# ${values[3]} will contain the next line's second value
# ... and so on ...
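Since the values are destined for a Makefile, here is a minimal GNU make sketch of how they could be consumed (the input_file name and the FIRST/SECOND variable names are just for illustration):

VALUES := $(shell sed 's/.*<\(.*\)>.*/\1/' input_file)
FIRST  := $(word 1,$(VALUES))
SECOND := $(word 2,$(VALUES))

show:
	@echo $(FIRST) $(SECOND)   # prints: 0xaaa 0xbbb (recipe line must start with a tab)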
I am new to Bash coding. I would like to concatenate a string to each element of a comma-separated string "array".
This is an example of what I have in mind:
s=a,b,c
# Here a function to concatenate the string "_string" to each of them.
# Expected result:
a_string,b_string,c_string
One way:
$ s=a,b,c
$ echo ${s//,/_string,}_string
a_string,b_string,c_string
Using a proper array is generally a much more robust solution. It allows the values to contain literal commas, whitespace, etc.
s=(a b c)
printf '%s\n' "${s[@]/%/_string}"
As suggested by chepner, you can use IFS="," to merge the result with commas.
(IFS=","; echo "${s[#]/%/_string}")
(The subshell is useful to keep the scope of the IFS reassignment from leaking to the current shell.)
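For example:

$ s=(a b c)
$ (IFS=","; echo "${s[*]/%/_string}")
a_string,b_string,c_string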
You could simply use a for loop:
main() {
    local input='a,b,c'
    local append='_string'
    # create an 'output' variable that is empty
    local output=
    # convert the input into an array called 'items' (without the commas)
    IFS=',' read -ra items <<< "$input"
    # loop over each item in the array and append the string, in this case '_string'
    for item in "${items[@]}"; do
        output+="${item}${append},"
    done
    # the loop re-added a comma after each item; remove the trailing one
    # so there are only commas _in between_ elements
    output=${output%,}
    echo "$output"
}
main
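Calling main then prints the expected a_string,b_string,c_string.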
I've split it up in three steps:
Make it into an actual array.
Append _string to each element in the array using Parameter expansion.
Turn it back into a scalar (for which I've made a function called turn_array_into_scalar).
#!/bin/bash
function turn_array_into_scalar() {
    local -n arr=$1   # -n makes `arr` a reference to the array `s`
    local IFS=$2      # set the field separator to the given delimiter (,)
    arr="${arr[*]}"   # "join" over IFS and assign the result back to `arr`
}
s=a,b,c
# make it into an array by turning , into newline and reading into `s`
readarray -t s < <(tr , '\n' <<< "$s")
# append _string to each string in the array by using parameter expansion
s=( "${s[#]/%/_string}" )
# use the function to make it into a scalar again and join over ,
turn_array_into_scalar s ,
echo "$s"
I am trying to read values from a CSV file dynamically based on the header. Here's what my input files can look like.
File 1:
name,city,age
john,New York,20
jane,London,30
or
File 2:
name,age,city,country
john,20,New York,USA
jane,30,London,England
I may not be following the best way to accomplish this but I tried the following code.
#!/bin/bash
{
    read -r line
    line=`tr ',' ' ' <<< $line`
    while IFS=, read -r `$line`
    do
        echo $name
        echo $city
        echo $age
    done
} < file.txt
I am expecting the above code to read the values of the header as the variable names. I know that the order of columns can be different for the input file, but I expect the files to have name, city and age columns. Is this the right approach? If so, what is the fix for the above code, which fails with the error "line7: name: command not found"?
The issue is caused by the backticks. Bash will evaluate the contents and replace the backticks with the output from the command it just evaluated.
You can simply use the variable after the read command to achieve what you want:
#!/bin/bash
{
    read -r line
    line=$(tr ',' ' ' <<< "$line")
    echo "$line"
    while IFS=, read -r $line; do
        echo "person: $name -- $city -- $age"
    done
} < file.txt
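With File 1 as input, this prints:

name city age
person: john -- New York -- 20
person: jane -- London -- 30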
Some notes on your code:
The backtick syntax is legacy; it is now preferred to use $(...) to evaluate commands. The new syntax is more flexible.
You can enable automatic script failure with set -euo pipefail. This will make your script stop if it encounters an error.
Your code is currently very sensitive to invalid header data:
with a file like
n ame,age,city,country
john,20,New York,USA
jane,30,London,England
your script (or rather the version at the beginning of my answer) will run without errors but produce invalid output.
It is also good practice to quote variables to prevent unwanted splitting.
To make it much more robust, you can change it as follows:
#!/bin/bash
set -euo pipefail
# -e and -o pipefail will make the script exit
# in case of a command failure (or piped command failure)
# -u will exit if a variable is undefined
# (in your case, if the header is invalid)
{
    read -r line
    readarray -d, -t header < <(printf "%s" "$line")
    # using an array allows detecting whether one of the header entries
    # contains an invalid character
    # the printf is needed because bash would add a newline to the
    # command input if using a here-string (<<<)
    while IFS=, read -r "${header[@]}"; do
        echo "$name"
        echo "$city"
        echo "$age"
    done
} < file.txt
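With File 1 this prints each field in turn:

john
New York
20
jane
London
30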
A slightly different approach lets awk handle the field separation and the ordering of the desired output given either of the input files. Below, awk stores the desired output order in the f[] (field) array set in the BEGIN rule. Then on the first line of a file (FNR==1) the array a[] is deleted and refilled with the headings from the current file. At that point you just loop over the field names in order in the f[] array and output the corresponding field from the current line, e.g.
awk -F, '
BEGIN { f[1]="name"; f[2]="city"; f[3]="age" }  # desired order
FNR==1 {                      # on first line read header
    delete a                  # clear a array
    for (i=1; i<=NF; i++)     # loop over headings
        a[$i] = i             # index by heading, val is field no.
    next                      # skip to next record
}
{
    print ""                  # optional newline between outputs
    for (i=1; i<=3; i++)      # loop over desired field order
        if (f[i] in a)        # validate field in a array
            print $a[f[i]]    # output the field value
}
' file1 file2
Example Use/Output
In your case with the content you show in file1 and file2, you would have:
$ awk -F, '
> BEGIN { f[1]="name"; f[2]="city"; f[3]="age" } # desired order
> FNR==1 { # on first line read header
> delete a # clear a array
> for (i=1; i<=NF; i++) # loop over headings
> a[$i] = i # index by heading, val is field no.
> next # skip to next record
> }
> {
> print "" # optional newline between outputs
> for (i=1; i<=3; i++) # loop over desired field order
> if (f[i] in a) # validate field in a array
> print $a[f[i]] # output the field value
> }
> ' file1 file2
john
New York
20
jane
London
30
john
New York
20
jane
London
30
Both files are read and handled identically despite having different field orderings. Let me know if you have further questions.
If using Bash version ≥ 4.2, it is possible to use an associative array to capture an arbitrary number of fields with their names as keys:
#!/usr/bin/env bash
# Associative array to store column names as keys and their values
declare -A fields
# Array to store column names with index
declare -a column_name
# Array to store row's values
declare -a line
# Commands block consuming CSV input
{
    # Read first line to capture column names
    IFS=, read -r -a column_name
    # Process records
    while IFS=, read -r -a line; do
        # Store column values under the corresponding field name
        for ((i=0; i<${#column_name[@]}; i++)); do
            # Fill the fields associative array
            fields["${column_name[i]}"]="${line[i]}"
        done
        # Dump fields for debug|demo purpose
        # Processing of each captured value could go there instead
        declare -p fields
    done
} < file.txt
Sample output with file 2:
declare -A fields=([country]="USA" [city]="New York" [age]="20" [name]="john" )
declare -A fields=([country]="England" [city]="London" [age]="30" [name]="jane" )
For older Bash versions without associative arrays, use the indexed column_name array instead:
#!/usr/bin/env bash
# Array to store column names with index
declare -a column_name
# Array to store values for a line
declare -a value
# Commands block consuming CSV input
{
    # Read first line to capture column names
    IFS=, read -r -a column_name
    # Process records
    while IFS=, read -r -a value; do
        # Print record separator
        printf -- '--------------------------------------------------\n'
        # Print captured field names and values
        for ((i=0; i<"${#column_name[@]}"; i++)); do
            printf '%-18s: %s\n' "${column_name[i]}" "${value[i]}"
        done
    done
} < file.txt
Output (again with file 2):
--------------------------------------------------
name : john
age : 20
city : New York
country : USA
--------------------------------------------------
name : jane
age : 30
city : London
country : England
I have a variable equal to a string, which is a series of key/value pairs separated by newlines.
I want to then replace these newline characters with spaces and set a new variable equal to the result.
From various answers on the internet I've arrived at the following:
#test.txt has the content:
#test=example
#what=s0omething
vars="$(cat ./test.txt)"
formattedVars= $("$vars" | tr '\n' ' ')
echo "$taliskerEnvVars"
The problem is that when I try to set formattedVars, it tries to execute the second line:
script.sh: line 7: test=example
what=s0omething: command not found
I just want formattedVars to equal test=example what=s0omething
What trick am I missing?
Change your line to:
formattedVars=$(tr '\n' ' ' <<< "$secretsContent")
Notice the space after the = in your code; whitespace around = is not permitted in assignment statements.
I see that you are not setting secretsContent in your code, you are setting vars instead.
If possible, use an array to hold contents of the file:
readarray -t vars < ./test.txt # bash 4
or
# bash 3.x
declare -a vars
while IFS= read -r line; do
vars+=( "$line" )
done < ./test.txt
Then you can do what you need with the array. You can make your space-separated list with
formattedVars="${vars[*]}"
but consider whether you need to. If the goal is to use them as a pre-command modifier, pass them through env, for instance:
env "${vars[@]}" my_command arg1 arg2
I wrote a simple bash function which reads a value from an ini file (the file is named by the variable CONF_FILE) and outputs it:
getConfValue() {
    #getConfValue section variable
    #return value of a specific variable from given section of a conf file
    section=$1
    var="$2"
    val=$(sed -nr "/\[$section\]/,/\[/{/$var/p}" $CONF_FILE)
    val=${val#$var=}
    echo "$val"
}
The problem is that it does not ignore comments and runs into trouble if multiple variable names within a section share common substrings.
Example ini file:
[general]
# TEST=old
; TEST=comment
TEST=new
TESTING=this will be output too
PATH=/tmp/test
Running getConfValue general PATH would output /tmp/test as expected, but running getConfValue general TEST shows all the problems this approach has.
How to fix that?
I know there are dedicated tools like crudini or tools for python, perl and php out in the wild, but I do not want to add extra dependencies for simple config file parsing. A solution incorporating awk instead of sed would be just fine too.
Sticking with sed you could anchor your var search to the start of the record using ^ and end it with an equal sign:
"/\[$section\]/,/\[/{/^$var=/p}"
If you are concerned about whitespace in front of your record you could account for that:
"/\[$section\]/,/\[/{/^(\W|)$var=/p}"
That ^(\W|)$var= says: "if there is whitespace (\W) or nothing (|) at the beginning of the line (^), followed by your variable name and an equal sign ($var=)."
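With the example ini file from the question (saved as, say, config.ini; the name is only for illustration), the anchored pattern now matches only the real assignment:

$ sed -nr '/\[general\]/,/\[/{/^TEST=/p}' config.ini
TEST=new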
If you wanted to switch over to awk you could use something like:
val=$(awk -F"=" -v section=$section -v var=$var '$1=="["section"]"{secFound=1}secFound==1 && $1==var{print $2; secFound=0}' $CONF_FILE)
That awk command splits each record on equal signs (-F"="). Then, if the first field in the record is your section ($1=="["section"]"), it sets the variable secFound to 1. Then... if secFound is 1 and the first field is exactly equal to your var variable (secFound==1 && $1==var), it prints the second field ({print $2}) and sets secFound to 0 so we don't pick up any other TEST keys.
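Putting it together, a sketch of getConfValue using that awk logic (the same one-liner, reformatted for readability):

getConfValue() {
    #getConfValue section variable
    section=$1
    var="$2"
    awk -F'=' -v section="$section" -v var="$var" '
        $1 == "[" section "]" { secFound = 1; next }
        secFound == 1 && $1 == var { print $2; secFound = 0 }
    ' "$CONF_FILE"
}
# getConfValue general TEST  -> new
# getConfValue general PATH  -> /tmp/test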
I encountered this problem and came up with a solution similar to others here.
The main difference is it uses a single awk call to get a response suitable for creating an associative array of the property/value pairs for a section.
This will not ignore commented properties, though adding something to do that should not be too hard.
Here's a testing script demonstrating the awk and declare statements used:
#!/bin/bash
#
# Parse a INI style properties file and extract property values for a given section
#
# Author: Alan Carlyle
# License: CC0 (https://creativecommons.org/about/cclicenses/)
#
# Example Input: (example.properties)
# [SEC1]
# item1=value1
# item2="value 2"
#
# [Section 2]
# property 1="Value 1 of 'Section 2'"
# property 2='Single "quoted" value'
#
# Usage:
# $ read_props example.properties "Section 2" property\ 2
# Single "quoted" value
#
# Section names and properties with spaces do not need to be quoted.
# Values with spaces must be quoted. Values can use single or double quotes.
# The following characters [ = ] can not be used in names or values.
#
# If the property is not provided, then the whole section is output.
#
propertiesFile=$1
section=$2
property=$3
# Extract the properties for the section, formatted for an associative array
sectionData="( "$(awk -F'=' -v s="$section" '/^\[/{ gsub(/[\[\]]/, "", $1); f = ($1 == s); next }
NF && f{ print "["$1"]="$2 }' "$propertiesFile")" )"
# Create an associative array from the extracted section data
declare +x -A "properties=$sectionData"
if [ -z "$property" ]
then
echo $sectionData
else
echo ${properties[$property]}
fi
Greetings!
I have a text file with parameters set as follows:
NameOfParameter Value1 Value2 Value3 ...
...
I want to find a needed parameter by its NameOfParameter using a regexp pattern, and return a selected Value to my Bash script.
I tried to do this with grep, but it returns the whole line instead of the Value.
Could you help me find an approach, please?
It was not clear if you want all the values together or only one specific one. In either case, use the power of the cut command to cut the columns you want from a file: -f 2- selects columns 2 and on (everything except the parameter name), and -d " " ensures the columns are treated as space-separated, as opposed to the default tab-separated.
egrep '^NameOfParameter ' your_file | cut -f 2- -d " "
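For example, with a line matching the question's format:

$ echo 'NameOfParameter Value1 Value2 Value3' | egrep '^NameOfParameter ' | cut -f 2- -d " "
Value1 Value2 Value3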
Bash:
values=($(grep '^NameOfParameter ' your_file))
echo "${values[0]}" # NameOfParameter
echo "${values[1]}" # Value1
echo "${values[2]}" # Value2
# etc.
for value in "${values[@]:1}" # iterate over values, skipping NameOfParameter
do
    echo "$value"
done
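For example, if your_file contains the single line NameOfParameter Value1 Value2 Value3, the loop prints:

Value1
Value2
Value3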