Bash: get multiple values from a text file on the same line - bash

I have a text file with 2 strings per line, and I need to build a curl command from these values.
text file:
www.example.com 38494740
www.example.org 49347398
www.example.net 94798340
I need to create a curl command for each line, e.g.
curl www.example.com/38494740
curl www.example.org/49347398
curl www.example.net/94798340
I have considered a while loop, but I have 2 strings per line....
UPDATE:
I need to use these values as variables; the command can also take this form: curl www.example.com/foo/38494740

awk -v OFS="/" '{$1=$1}1' curl
www.example.com/38494740
www.example.org/49347398
www.example.net/94798340
Explanation:
OFS defines how your output fields are separated. Here it's set to "/".
{$1=$1}: makes awk reconstruct the record so that the new OFS takes effect.
1: awk's default action, which is to print the line.
As per the comments:
while read -r domain sub
do
  curl "$domain/$sub"
done < curl
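If you also need the extra path segment from the UPDATE, the same loop extends naturally. A minimal sketch, assuming the extra segment is literally foo and the input file is the one named curl above:
while read -r domain sub
do
  curl "$domain/foo/$sub"   # "foo" is the stand-in segment from the question's UPDATE
done < curl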

This is one foolproof way of getting it done.
#!/bin/bash
while read -r url port # read the whitespace-separated 'url' and 'port' from each line
do
  curl "${url}/${port}" # construct the URL as "url/port" and run curl on it
done < file

while read hostname number ; do echo "curl ${hostname}/${number}" ; done < inputFile
Output:
curl www.example.com/38494740
curl www.example.org/49347398
curl www.example.net/94798340
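If the generated commands look right, the same loop's output can be piped straight to a shell to execute them; a sketch under that assumption:
while read hostname number ; do echo "curl ${hostname}/${number}" ; done < inputFile | sh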

One easy solution is to just use tr to translate all spaces to /:
tr ' ' '/' < inputfile
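If the goal is to actually run curl on each resulting URL rather than just print it, the tr output can be fed to xargs. A sketch, assuming the URLs contain no embedded whitespace:
tr ' ' '/' < inputfile | xargs -n1 curl
Here xargs -n1 invokes curl once per URL.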

Related

How to convert piped/awk output to string/variable

I'm trying to create a bash function that automatically updates a cli tool. So far I've managed to get this:
update_cli_tool () {
# the following will automatically be redirected to .../releases/tag/vX.X.X
# there I get the location from the header, and remove it to get the full url
latest_release_url=$(curl -i https://github.com/.../releases/latest | grep location: | awk -F 'location: ' '{print $2}')
# to get the version, I get the 8th element from the url .../releases/tag/vX.X.X
latest_release_version=$(echo "$latest_release_url" | awk -F '/' '{print $8}')
# this is where it breaks
# the first part just replaces the "tag" with "download" in the url
full_url="${latest_release_url/tag/download}/.../${latest_release_version}.zip"
echo "$full_url" # or curl $full_url, also fails
}
Expected output: https://github.com/.../download/vX.X.X/vX.X.X.zip
Actual output: -.zip-.../.../releases/download/vX.X.X
When I just echo "latest_release_url: $latest_release_url" (same for version), it prints it correctly, but not when I use the above mentioned flow. When I hardcode the ..._url and ..._version, the full_url works fine. So my guess is I have to somehow capture the output and convert it to a string? Or perhaps concatenate it another way?
Note: I've also used ..._url=`curl -i ...` (with backticks instead of $(...)), but this gave me the same results.
The curl output will use \r\n line endings. The stray carriage return in the url variable is tripping you up. Observe it with printf '%q\n' "$latest_release_url"
Try this:
latest_release_url=$(
  curl --silent -i https://github.com/.../releases/latest \
    | awk -v RS='\r\n' '$1 == "location:" {print $2}'
)
Then the rest of the script should look right.
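Alternatively, if you would rather keep your original grep/awk pipeline, you can strip the stray carriage return from the variable afterwards. A minimal sketch using bash parameter expansion:
# remove the trailing carriage return left over from curl's \r\n line endings
latest_release_url=${latest_release_url%$'\r'}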

Unix sed command - global replacement is not working

I have a scenario where we want to collapse runs of double quotes in the data down to a single double quote. Since the input data is comma-delimited and every column is enclosed in double quotes "", I ran into an issue, explained below:
The sample data looks like this:
"int","","123","abd"""sf123","top"
So, the output would be:
"int","","123","abd"sf123","top"
I tried the approach below, but only the first occurrence is handled; not sure what the issue is:
sed -ie 's/,"",/,"NULL",/g;s/""/"/g;s/,"NULL",/,"",/g' inputfile.txt
Step 1: replace all ,"", with ,"NULL",
Step 2: replace every run of multiple quotes (""" or "" or """") with a single "
Step 3: revert step 1, replacing ,"NULL", back to ,"",
But only the first occurrence gets changed and the rest stays the same, as below:
If input is :
"int","","","123","abd"""sf123","top"
the output is coming as:
"int","","NULL","123","abd"sf123","top"
But, the output should be:
"int","","","123","abd"sf123","top"
You may try this perl with a lookahead:
perl -pe 's/("")+(?=")//g' file
"int","","123","abd"sf123","top"
"int","","","123","abd"sf123","top"
"123"abcs"
Where input is:
cat file
"int","","123","abd"""sf123","top"
"int","","","123","abd"""sf123","top"
"123"""""abcs"
Breakdown:
("")+: match 1 or more pairs of double quotes
(?="): only if those pairs are followed by a single "
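If the cleaned result should be written back to the file rather than printed, perl's -i switch edits in place; a sketch of the same command with that change:
perl -i -pe 's/("")+(?=")//g' file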
Using sed
$ sed -E 's/(,"",)?"+(",)?/\1"\2/g' input_file
"int","","123","abd"sf123","top"
"int","","","123","abd"sf123","top"
"123"abcs"
With your shown samples, please try the following awk code. Written and tested in GNU awk; it should work in any version of awk.
awk '
BEGIN{ FS=OFS="," }
{
  for(i=1;i<=NF;i++){
    if($i!~/^""$/){
      gsub(/"+/,"\"",$i)
    }
  }
}
1
' Input_file
Explanation: set both the field separator and the output field separator to , for all lines of Input_file. Then traverse each field of the line and, if the field is not an empty "" field, globally replace every run of 1 or more " characters with a single ". Then print the line.
With sed you could match 1 or more repetitions of "" using a group, followed by a single ".
Then in the replacement use a single ".
sed -E 's/("")+"/"/g' file
For this content
$ cat file
"int","","123","abd"""sf123","top"
"int","","","123","abd"""sf123","top"
"123"""""abcs"
The output is
"int","","123","abd"sf123","top"
"int","","","123","abd"sf123","top"
"123"abcs"
sed s'#"""#"#' file
That works. I will demonstrate another method though, which you may also find useful in other situations.
#!/bin/sh -x
cat > ed1 <<EOF
4s/"""/"/
wq
EOF
cp file stack
tr ',' '\n' < stack > f2   # one field per line, so field number == line number
ed -s f2 < ed1             # the """ sits in field 4 of the sample data
tr '\n' ',' < f2 > stack   # convert back to csv
rm -v ./f2
rm -v ./ed1
The point of this is that if you have a big CSV record all on one line and you want to edit a specific field, then if you know the field number, you can convert all the commas to newlines and use the field number as a line number to substitute, append after it, or insert before it with ed, and then convert back to CSV.
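To see why the field number works as a line number, here is the first conversion step run on its own, assuming file holds just the first sample record:
$ tr ',' '\n' < file
"int"
""
"123"
"abd"""sf123"
"top"
The """ to fix sits on line 4, which is why the ed script above uses 4s.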

Extract and display multiple strings in a single line

I have a single line, and I want to extract/display (from bash) all complete strings starting with specific characters.
Single line to filter:
"ABC-3324545/":{"acc":"fff"},"ABC-652123/":{"acc":"sss"},"ABC-15642/":{"acc":"rrr"}...
Specific characters to search for in the strings: ABC-
Display needed:
ABC-3324545
ABC-652123
ABC-15642
I think I need to combine multiple commands like grep, awk, sed, etc., but unfortunately, no result :(
curl -H "Token: xxxx" $URL | grep -o 'ABC-'
returns
ABC-
ABC-
ABC-
curl -H "Token: xxxx" $URL | awk -F "PKI-" '{ print $1; }'
...doesn't match what I want to do.
Any idea plz?
Data file:
$ cat abc.dat
"ABC-3324545/":{"acc":"fff"},"ABC-652123/":{"acc":"sss"},"ABC-15642/":{"acc":"rrr"}...
"DEF-3324545/":{"acc":"fff"},"DEF-652123/":{"acc":"sss"},"DEF-15642/":{"acc":"rrr"}...
Assuming the desired string a) starts with ABC- and b) ends before the next /, one grep idea:
$ grep -o "ABC-[^/]*" abc.dat
ABC-3324545
ABC-652123
ABC-15642
Where [^/]* says to match everything that is not a /, i.e., match everything up to the next /.
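Applied to the curl pipeline from the question (same hypothetical Token header and $URL), that becomes:
curl -H "Token: xxxx" "$URL" | grep -o 'ABC-[^/]*'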

extract xml using a variable in awk

I want search an XML file to extract specific XML block containing this string 58B338939C5B1970E1008000AC10E225_HCA_13
I am able to do it via the following command:
awk 'BEGIN{RS="<[/]?WorkResponseMessage>"} /58B338939C5B1970E1008000AC10E225_HCA_13/{print $0,"</WorkResponseMessage>"}' ag1.xml > ag2.xml
My query is: I want to pass the search string in a variable from the command line and use that variable to search, for example:
awk 'BEGIN{RS="<[/]?WorkResponseMessage>"} /$m/{print $0,"</WorkResponseMessage>"}' ag1.xml > ag2.xml
Here 'm' is my variable. I am able to get the value inside 'm', but it doesn't seem to work in the awk command. I have tried quotes ("", '') around m as well, and that doesn't work either. The awk -v option also doesn't work for me.
Try this (inside a single-quoted awk program the shell never expands $m, so pass the value in with -v and match with $0 ~ m):
m="58B338939C5B1970E1008000AC10E225_HCA_13"
echo $m
awk -v m="$m" 'BEGIN{RS="<[/]?WorkResponseMessage>"} $0 ~ m {print $0,"</WorkResponseMessage>"}' ag1.xml > ag2.xml
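A side note: $0 ~ m treats m as a regular expression. If the search string should always be matched literally, awk's index() does a plain substring test; a sketch of the same command with that one change:
awk -v m="$m" 'BEGIN{RS="<[/]?WorkResponseMessage>"} index($0, m){print $0,"</WorkResponseMessage>"}' ag1.xml > ag2.xml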

Remove Leading Spaces from a variable in Bash

I have a script that exports an XML file to my desktop and then extracts all the data in the "id" tags and exports that to a CSV file.
xmlstarlet sel -t -m '//id[1]' -v . -n </users/$USER/Desktop/List.xml > /users/$USER/Desktop/List2.csv
I then use the following command to add a comma after each number and store the result in a variable.
devices=$(sed "s/$/,/g" /users/$USER/Desktop/List2.csv)
If I echo that variable I get an output that looks like this:
123,
124,
125,
etc.
What I need help with is removing those spaces so that the output will look like 123,124,125 with no leading space. I've tried multiple solutions that I can't get to work. Any help would be amazing!
If you don't want newlines, don't tell xmlstarlet to put them there in the first place.
That is, change -n to -o ',' to put a comma after each value rather than a newline:
{ xmlstarlet sel -t -m '//id[1]' -v . -o ',' && printf '\n'; } \
<"/users/$USER/Desktop/List.xml" \
>"/users/$USER/Desktop/List2.csv"
The printf '\n' here puts a final newline at the end of your CSV file after xmlstarlet has finished writing its output.
If you don't want the trailing , this leaves on the output file, the easiest way to be rid of it is to read the result of xmlstarlet into a variable and manipulate it there:
content=$(xmlstarlet sel -t -m '//id[1]' -v . -o ',' <"/users/$USER/Desktop/List.xml")
printf '%s\n' "${content%,}" >"/users/$USER/Desktop/List2.csv"
For a sed solution, try
sed ':a;N;$!ba;y/\n/,/' /users/$USER/Desktop/List2.csv
or if you want a comma even after the last:
sed ':a;N;$!ba;y/\n/,/;s/$/,/' /users/$USER/Desktop/List2.csv
but simpler still would be
tr '\n' ',' < /users/$USER/Desktop/List2.csv
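None of the above mention it, but paste can also join lines with a delimiter, and unlike tr it does not leave a trailing comma:
paste -sd, /users/$USER/Desktop/List2.csv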
