add forward slash in bash to a variable string output - bash

I have been pulling my hair out on this one for days.. very annoying..
I know it is something to do with utf-8 and the string not outputting in the proper format, but I can't figure out what..
This is the code:
#!/bin/bash
#test
REGURL=http://bugs.ws
CHECKURL=$(curl -m 3 -sk --head "$REGURL" | grep -i "location" | awk '{print $2}')
if [[ "${CHECKURL: -1}" != *'/'* ]] # if redirected url does not contain / at end, we need to add it
then
CHECKURL+='/'
echo "$CHECKURL"
fi
This is doing some character substitution rather than simply adding the '/' after the URL..
It works when you do it without the piped curl and grep, so I know it is something to do with grep or curl..
Basically the outcome MUST have a forward slash at the end. For example, http://bugs.ws ends up with a location redirect of https://alphatermite.com, but I need to add a '/' to the end of "alphatermite.com". I've tried it all; I just can't get the forward slash to go after the variable result.. it keeps substituting it for the first character of the grepped result.. UGHHHH (yes, this test code needs to be in bash)

The output from curl uses carriage return + linefeed (CRLF) line terminators; unix tools expect only a linefeed, and treat the carriage return as part of the line's content. Net result: CHECKURL has a not-normally-visible carriage return character at the end, which confuses everything.
Specifically, CHECKURL winds up containing "https://alphatermite.com<carriage return>/", which prints something like:
https://alphatermite.com
/
...except with only a carriage return (no linefeed) between, the "/" prints over top of the "h" in "https".
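You can reproduce the overprinting effect directly; a quick demonstration in any terminal:
printf 'https://alphatermite.com\r/\n'
# the \r moves the cursor back to column 1, so the "/" overwrites the
# "h" and the terminal displays "/ttps://alphatermite.com"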
Solution: you could add | tr -d '\r' to the pipeline creating CHECKURL, but I'd just have awk do everything in one step:
CHECKURL=$(curl -m 3 -sk --head "$REGURL" | awk '/^[Ll]ocation:/ {sub("\r", "", $2); print $2}')
Explanation: the /^[Ll]ocation:/ part makes awk print only the Location (or "location") header, and sub("\r", "", $2) deletes the carriage return from $2 before it's printed.
BTW, I'd use this to test for "/" at the end of the string:
if [[ "${CHECKURL}" != *'/' ]]
You can either extract the last character and see if it's "/", or use a wildcard pattern to check if it ends with "/"; no need to do both.
BTW2, I also recommend using lower- or mixed-case variable names, to avoid accidentally using one of the many all-caps names that have special meanings (and hence unexpected consequences).
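Putting those pieces together, a minimal corrected version of the whole script might look like this (a sketch; the lower-case names are just renamings of the question's variables):
#!/bin/bash
regurl=http://bugs.ws
# print only the Location header value, with the trailing CR stripped
checkurl=$(curl -m 3 -sk --head "$regurl" | awk '/^[Ll]ocation:/ {sub("\r", "", $2); print $2}')
# append a slash only if the URL does not already end with one
if [[ $checkurl != */ ]]; then
    checkurl+='/'
fi
echo "$checkurl"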

Try bash regex
url=http://bugs.ws
re='.*/$'
[[ $url =~ $re ]] || url+='/'
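A quick check of both cases, continuing from the snippet above:
echo "$url"             # http://bugs.ws/
url=http://bugs.ws/
[[ $url =~ $re ]] || url+='/'
echo "$url"             # still http://bugs.ws/ (no second slash is added)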

Related

find & replace only exact match between delimiters in string values

I have a string value stored in a variable:
PTYPE="Other Farm|Raised Ranch|Farm house|Other|A-Frame|Log Home"
I want to find & replace Other with some value like NOTHING. All values are stored in variables.
WhatToChange=Other
NewValue=NOTHING
echo $PTYPE|sed -e "s#${WhatToChange}#${NewValue}#g"
This is replacing all the occurrences of Other, giving output like:
NOTHING Farm|Raised Ranch|Farm house|NOTHING|A-Frame|Log Home
Is there any way I can change only the exact match? The position of ${WhatToChange} within the string varies.
As you have well-defined fields and want an exact match, awk could be easier to use than sed; at the very least, you won't have to worry about escaping the strings for use in the sed expression:
echo "Other Farm|Raised Ranch|Farm house|Other|A-Frame|Log Home" |
awk -v old="Other" -v new="NOTHING" \
'BEGIN {FS = OFS = "|"} {for(i=1;i<=NF;i++) if($i == old) $i = new} 1'
output:
Other Farm|Raised Ranch|Farm house|NOTHING|A-Frame|Log Home
To match either the exact character | or the beginning of the line, use ([|]|^).
To match either the exact character | or the end of the line, use ([|]|$).
To put a | back in place only when appropriate, store these in match groups, and refer to those groups with \1 or \2:
PTYPE="Other Farm|Raised Ranch|Farm house|Other|A-Frame|Log Home"
WhatToChange=Other
NewValue=NOTHING
sed -re "s#(^|[|])${WhatToChange}($|[|])#\1${NewValue}\2#g" <<<"$PTYPE"
...emits as output:
Other Farm|Raised Ranch|Farm house|NOTHING|A-Frame|Log Home
...and still works even if WhatToChange is matched at the beginning or end of the list.
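For instance, targeting the first field checks the begin-of-line edge case (values as defined in the question):
WhatToChange='Other Farm'
sed -re "s#(^|[|])${WhatToChange}($|[|])#\1${NewValue}\2#g" <<<"$PTYPE"
# NOTHING|Raised Ranch|Farm house|Other|A-Frame|Log Home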
For fun, some perl:
This is like @Charles's sed solution: Note the \Q...\E so that the "to change" value is treated as literal text.
echo "$PTYPE" | perl -spe '
s{ (?:^|\|)\K \Q$WhatToChange\E (?=\||$) }{$NewValue}gx
' -- -WhatToChange=Other -NewValue=NOTHING
This is like @Fravadona's awk solution:
echo "$PTYPE" | perl -F'[|]' -sane '
print join "|", map {$_ eq $WhatToChange ? $NewValue : $_} #F
' -- -WhatToChange=Other -NewValue=NOTHING
How about
echo ${PTYPE//$WhatToChange/$NewValue}
UPDATE:
I just realized that the replacement should happen only if WhatToChange is the whole content between two separators (|). In this case, we can do it in bash as well (without the need to resort to a child process):
if [[ $PTYPE =~ (.*[|]|^)$WhatToChange([|].*|$) ]]
then
echo "${BASH_REMATCH[1]}${NewValue}${BASH_REMATCH[2]}"
fi
UPDATE (based on the comment by Fravadona):
Used in this way, WhatToChange is interpreted as a regular expression. This can be useful if you want to catch variations of the string, for instance
WhatToChange='[Oo]ther' # to catch Other and other
If you always want to have a literal match, you have to quote the variable:
[[ $PTYPE =~ (.*[|]|^)"$WhatToChange"([|].*|$) ]]
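A quick illustration of the difference, on a hypothetical three-field value:
WhatToChange='[Oo]ther'
[[ 'a|other|b' =~ (.*[|]|^)$WhatToChange([|].*|$) ]] && echo "regex match"        # prints: regex match
[[ 'a|other|b' =~ (.*[|]|^)"$WhatToChange"([|].*|$) ]] || echo "no literal match" # prints: no literal match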
This might work for you (GNU sed & bash):
<<<"$PTYPE" sed 'y/|/\n/;s/^'"$WhatToChange"'$/'"$NewValue"'/mg;y/\n/|/'
Input $PTYPE as a here-string into sed.
Translate | separators to newlines.
Replace $WhatToChange to $NewValue for each matched line.
Translate newlines back to |'s.
N.B. The y command presents each value between separators on its own line, and the m flag in the substitution command puts sed in multiline mode, so that ^ and $ match at those embedded newlines.
An alternative:
sed -z 'y/|/\x00/;s/^'"$WhatToChange"'$/'"$NewValue"'/mg;y/\x00/|/;' file

convert a file content using shell script

Hello everyone, I'm a beginner in shell coding. On a daily basis I need to convert a file's data to another format; I usually do it manually with a text editor, but I often make mistakes. So I decided to write a simple script that can do the work for me.
The file's content like this
/release201209
a1,a2,"a3",a4,a5
b1,b2,"b3",b4,b5
c1,c2,"c3",c4,c5
to this:
a2>a3
b2>b3
c2>c3
The script should ignore the first line and print the second and third values separated by '>'
I'm half way there, and here is my code
#!/bin/bash
#while Loops
i=1
while IFS=\" read t1 t2 t3
do
test $i -eq 1 && ((i=i+1)) && continue
echo $t1|cut -d\, -f2 | { tr -d '\n'; echo \>$t2; }
done < $1
The problem in my code is that the last line isn't printed unless the file ends with a newline (\n).
I also want the echo output to go into a new CSV file (I tried redirecting standard output to my new file, but only the last echo ended up there).
Can someone please help me out? Thanks in advance.
Rather than treating the double quotes as a field separator, it seems cleaner to just delete them (assuming that is valid). E.g.:
$ < input tr -d '"' | awk 'NR>1{print $2,$3}' FS=, OFS=\>
a2>a3
b2>b3
c2>c3
If you cannot just strip the quotes as in your sample input, because those quotes are escaping commas, you could hack together a solution, but you would be better off using a proper CSV parsing tool (e.g. Perl's Text::CSV).
Here's a simple pipeline that will do the trick:
sed '1d' data.txt | cut -d, -f2-3 | tr -d '"' | tr ',' '>'
Here, we're just removing the first line (as desired), selecting fields 2 & 3 (based on a comma field separator), removing the double quotes and mapping the remaining , to >.
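Assuming the sample content from the question is saved as data.txt, the pipeline produces:
$ sed '1d' data.txt | cut -d, -f2-3 | tr -d '"' | tr ',' '>'
a2>a3
b2>b3
c2>c3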
Use this Perl one-liner:
perl -F',' -lane 'next if $. == 1; print join ">", map { tr/"//d; $_ } @F[1,2]' in_file
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in -F option.
-F',' : Split into #F on comma, rather than on whitespace.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches

Convert multi-line csv to single line using Linux tools

I have a .csv file that contains double quoted multi-line fields. I need to convert the multi-line cell to a single line. It doesn't show in the sample data but I do not know which fields might be multi-line so any solution will need to check every field. I do know how many columns I'll have. The first line will also need to be skipped. I don't know how much data there is, so performance isn't a consideration.
I need something that I can run from a bash script on Linux. Preferably using tools such as awk or sed and not actual programming languages.
The data will be processed further with Logstash but it doesn't handle double quoted multi-line fields hence the need to do some pre-processing.
I tried something like this and it kind of works on one row but fails on multiple rows.
sed -e :0 -e '/,.*,.*,.*,.*,/b' -e N -e '1n;N;N;N;s/\n/ /g' -e b0 file.csv
CSV example
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
The output I want is
First name,Last name,Address,ZIP
John,Doe,Country City Street,12345
Jane,Doe,Country City Street,67890
etc.
etc.
First my apologies for getting here 7 months late...
I came across a problem similar to yours today, with multiple multi-line fields. I was glad to find your question, but at least in my case there is the added complexity that, as more than one field can span lines, quotes might open, close and open again on the same line... anyway, reading a lot and combining answers from different posts I came up with something like this:
First I count the quotes in a line; to do that, I take out everything but the quotes and then use wc:
quotes=`echo $line | tr -cd '"' | wc -c` # Counts the quotes
If you think of a single multi-line field, knowing whether there are 1 or 2 quotes is enough. In a more generic scenario like mine I have to know whether the number of quotes is odd or even to know if the line completes the record or expects more information.
To check for even or odd you can use the modulo operator (%); in general:
even % 2 = 0
odd % 2 = 1
For the first line:
Odd means that the line expects more information on the next line.
Even means the line is complete.
For the subsequent lines, I have to know the status of the previous one. For instance, in your sample text:
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
You can say the line John,Doe,"Country has 1 quote (odd), which means the status of the record is incomplete or open.
When you go to the next line, City, there are no quotes (even). Nevertheless this does not mean the record is complete; you have to consider the previous status... so for the lines following the first one it will be:
Odd means that record status toggles (incomplete to complete).
Even means that record status remains as the previous line.
What I did was loop line by line, carrying the status of each line over to the next one:
incomplete=0
cat file.csv | while read line; do
quotes=`echo $line | tr -cd '"' | wc -c` # Counts the quotes
incomplete=$((($quotes+$incomplete)%2)) # Check if Odd or Even to decide status
if [ $incomplete -eq 1 ]; then
echo -n "$line " >> new.csv # If line is incomplete join with next
else
echo "$line" >> new.csv # If line completes the record finish
fi
done
Once this is executed on a file in your format, it generates a new.csv like this:
First name,Last name,Address,ZIP
John,Doe,"Country City Street",12345
I like one-liners as much as anyone; I wrote the script above for the sake of clarity, but you can - arguably - write it in one line like:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l " || echo "$l";done >new.csv
I would appreciate it if you could go back to your example and see if this works for your case (which you most likely already solved). Hopefully this can still help someone else down the road...
Recovering the multi-line fields
Every need is different; in my case I wanted the records on one line to further process the csv and add some bash-extracted data, but I also wanted to keep the csv as it was. To accomplish that, instead of joining the lines with a space I used a marker string - unlikely to occur in the data - that I could later search and replace:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l ~newline~ " || echo "$l";done >new.csv
The marker is ~newline~; the choice is totally arbitrary of course.
Then, after doing my processing, I took the csv text file and replaced the coded newlines with real newlines:
sed -i 's/ ~newline~ /\n/g' new.csv
References:
Ternary operator: https://stackoverflow.com/a/3953666/6316852
Count char occurrences: https://stackoverflow.com/a/41119233/6316852
Other peculiar cases: https://www.linuxquestions.org/questions/programming-9/complex-bash-string-substitution-of-csv-file-with-multiline-data-937179/
TL;DR
Run this:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l " || echo "$l";done >new.csv
... and collect results in new.csv
I hope it helps!
If Perl is your option, please try the following:
perl -e '
while (<>) {
$str .= $_;
}
while ($str =~ /("(("")|[^"])*")|((^|(?<=,))[^,]*((?=,)|$))/g) {
if (($el = $&) =~ /^".*"$/s) {
$el =~ s/^"//s; $el =~ s/"$//s;
$el =~ s/""/"/g;
$el =~ s/\s+(?!$)/ /g;
}
push(@ary, $el);
}
foreach (@ary) {
print /\n$/ ? "$_" : "$_,";
}' sample.csv
sample.csv:
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
John,Doe,"Country
City
Street",67890
Result:
First name,Last name,Address,ZIP
John,Doe,Country City Street,12345
John,Doe,Country City Street,67890
This might work for you (GNU sed):
sed ':a;s/[^,]\+/&/4;tb;N;ba;:b;s/\n\+/ /g;s/"//g' file
Test each line to see that it contains the correct number of fields (in the example that was 4). If there are not enough fields, append the next line and repeat the test. Otherwise, replace the newline(s) by spaces and finally remove the "'s.
N.B. This may be fraught with problems such as ,'s between "'s and quoted "'s.
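On the question's sample (4 fields per record), this should produce:
$ sed ':a;s/[^,]\+/&/4;tb;N;ba;:b;s/\n\+/ /g;s/"//g' file
First name,Last name,Address,ZIP
John,Doe,Country City Street,12345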
Try cat -v file.csv. When the file was made with Excel, you might have some luck: when the newlines inside a field are a simple \n and the newline at the end of each record is \r\n (which cat -v shows as ^M), parsing is simple.
# delete all newlines and replace the ^M with a new newline.
tr -d "\n" < file.csv| tr "\r" "\n"
# Above two steps with one command
tr "\n\r" " \n" < file.csv
When you want a space between the joined lines, you need an additional step.
tr "\n\r" " \n" < file.csv | sed '2,$ s/^ //'
EDIT: @sjaak commented this didn't work in his case.
When your broken lines also contain ^M you can still be a lucky (wo-)man.
When your broken field is always the first field in double quotes and you have GNU sed 4.2.2, you can join 2 lines when the first line has exactly one double quote.
sed -rz ':a;s/(\n|^)([^"]*)"([^"]*)\n/\1\2"\3 /;ta' file.csv
Explanation:
-z don't use \n as line endings
:a label for repeating the step after successful replacement
(\n|^) Match at a newline or at the very start of the input
([^"]*) Substring without a "
ta Go back to label a and repeat
awk pattern matching works for this.
The answer in one line:
awk '/,"/{ORS=" "};/",/{ORS="\n"}{print $0}' YourFile
If you'd like to drop the quotes, you could use:
awk '/,"/{ORS=" "};/",/{ORS="\n"}{print $0}' YourFile | sed 's/"//gw NewFile'
but I prefer to keep them.
To explain the code:
/Pattern/ : find Pattern in the current line.
ORS : the output record separator, printed after each record.
$0 : the whole of the current line.
's/OldPattern/NewPattern/' : substitute the first OldPattern with NewPattern
/g : applies the substitution to all occurrences of OldPattern
/w : writes the result to NewFile
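Applied to the question's sample (saved as YourFile), the one-liner joins the quoted address back onto a single record:
$ awk '/,"/{ORS=" "};/",/{ORS="\n"}{print $0}' YourFile
First name,Last name,Address,ZIP
John,Doe,"Country City Street",12345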

What does ##*/ do in bash? [duplicate]

I have a string like this:
/var/cpanel/users/joebloggs:DNS9=domain.example
I need to extract the username (joebloggs) from this string and store it in a variable.
The format of the string will always be the same with the exception of joebloggs and domain.example, so I am thinking the string can be split twice using cut?
The first split would split by : and we would store the first part in a variable to pass to the second split function.
The second split would split by / and store the last word (joebloggs) into a variable
I know how to do this in PHP using arrays and splits but I am a bit lost in bash.
To extract joebloggs from this string in bash using parameter expansion without any extra processes...
MYVAR="/var/cpanel/users/joebloggs:DNS9=domain.example"
NAME=${MYVAR%:*} # retain the part before the colon
NAME=${NAME##*/} # retain the part after the last slash
echo $NAME
Doesn't depend on joebloggs being at a particular depth in the path.
Summary
An overview of a few parameter expansion modes, for reference...
${MYVAR#pattern} # delete shortest match of pattern from the beginning
${MYVAR##pattern} # delete longest match of pattern from the beginning
${MYVAR%pattern} # delete shortest match of pattern from the end
${MYVAR%%pattern} # delete longest match of pattern from the end
So # means match from the beginning (think of a comment line) and % means from the end. One instance means shortest and two instances means longest.
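A quick illustration of shortest vs. longest matching, using a made-up value:
MYVAR="a/b/c.d.e"
echo "${MYVAR#*/}"     # b/c.d.e   (shortest match of */ removed from the front)
echo "${MYVAR##*/}"    # c.d.e     (longest match of */ removed from the front)
echo "${MYVAR%.*}"     # a/b/c.d   (shortest match of .* removed from the end)
echo "${MYVAR%%.*}"    # a/b/c     (longest match of .* removed from the end)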
You can get substrings based on position using numbers:
${MYVAR:3} # Remove the first three chars (leaving 4..end)
${MYVAR::3} # Return the first three characters
${MYVAR:3:5} # The next five characters after removing the first 3 (chars 4-8)
You can also replace particular strings or patterns using:
${MYVAR/search/replace}
The pattern is in the same format as file-name matching, so * (any characters) is common, often followed by a particular symbol like / or .
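For instance, here are the first-match and all-matches forms (// replaces every occurrence; made-up value):
MYVAR="users/joebloggs/domain.example"
echo "${MYVAR/o/0}"    # users/j0ebloggs/domain.example  (first match only)
echo "${MYVAR//o/0}"   # users/j0ebl0ggs/d0main.example  (all matches)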
Examples:
Given a variable like
MYVAR="users/joebloggs/domain.example"
Remove the path leaving file name (all characters up to a slash):
echo ${MYVAR##*/}
domain.example
Remove the file name, leaving the path (delete shortest match after last /):
echo ${MYVAR%/*}
users/joebloggs
Get just the file extension (remove all before last period):
echo ${MYVAR##*.}
example
NOTE: To do two operations, you can't combine them, but have to assign to an intermediate variable. So to get the file name without path or extension:
NAME=${MYVAR##*/} # remove part before last slash
echo ${NAME%.*} # from the new var remove the part after the last period
domain
Define a function like this:
getUserName() {
echo $1 | cut -d : -f 1 | xargs basename
}
And pass the string as a parameter:
userName=$(getUserName "/var/cpanel/users/joebloggs:DNS9=domain.example")
echo $userName
What about sed? That will work in a single command:
sed 's#.*/\([^:]*\).*#\1#' <<<$string
The # are being used for regex dividers instead of / since the string has / in it.
.*/ grabs the string up to the last forward slash.
\( .. \) marks a capture group. This is \([^:]*\).
The [^:] says any character except a colon, and the * means zero or more.
.* means the rest of the line.
\1 means substitute what was found in the first (and only) capture group. This is the name.
Here's the breakdown matching the string with the regular expression:
/var/cpanel/users/   joebloggs    :DNS9=domain.example   ->   joebloggs
.*/                  \([^:]*\)    .*                     ->   \1
Using a single Awk:
... | awk -F '[/:]' '{print $5}'
That is, using as field separator either / or :, the username is always in field 5.
To store it in a variable:
username=$(... | awk -F '[/:]' '{print $5}')
A more flexible implementation with sed that doesn't require username to be field 5:
... | sed -e s/:.*// -e s?.*/??
That is, delete everything from : onward, and then delete everything up to and including the last /. sed may also be slightly faster than awk here.
Using a single sed
echo "/var/cpanel/users/joebloggs:DNS9=domain.example" | sed 's/.*\/\(.*\):.*/\1/'
I like to chain together awk using different delimiters set with the -F argument. First, split the string on /users/ and then on :
txt="/var/cpanel/users/joebloggs:DNS9=domain.com"
echo $txt | awk -F"/users/" '{print $2}' | awk -F: '{print $1}'
$2 gives the text after the delim, $1 the text before it.
I know I'm a little late to the party and there are already good answers, but here's my method of doing something like this.
DIR="/var/cpanel/users/joebloggs:DNS9=domain.example"
echo ${DIR} | rev | cut -d'/' -f 1 | rev | cut -d':' -f1
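Step by step, each stage of that pipeline produces (a trace using the DIR value above):
echo "$DIR" | rev                                          # elpmaxe.niamod=9SND:sggolbeoj/sresu/lenapc/rav/
echo "$DIR" | rev | cut -d'/' -f 1                         # elpmaxe.niamod=9SND:sggolbeoj
echo "$DIR" | rev | cut -d'/' -f 1 | rev                   # joebloggs:DNS9=domain.example
echo "$DIR" | rev | cut -d'/' -f 1 | rev | cut -d':' -f1   # joebloggs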

awk split on a different token

I am trying to initialize an array from a string split using awk.
I expect the tokens to be delimited by ",", but somehow they aren't.
The input is a string returned by curl from the address http://www.omdbapi.com/?i=&t=the+campaign
I've tried to remove any extra carriage returns or other things that could cause confusion, but in all the clients I have checked it looks like a single-line string.
{"Title":"The Campaign","Year":"2012","Rated":"R", ...
and this is the output
-metadata {"Title":"The **-metadata** Campaign","Year":"2012","Rated":"R","....
It should have been
-metadata {"Title":"The Campaign"
Here's my piece of code:
__tokens=($(echo $omd_response | awk -F ',' '{print}'))
for i in "${__tokens[@]}"
do
echo "-metadata" "$i"
done
Any help is welcome
I would take seriously the comment by @cbuckley: use a json-aware tool rather than trying to parse the line with simple string tools. Otherwise, your script will break if a quoted string has a comma inside, for example.
At any event, you don't need awk for this exercise, and it isn't helping you because the way awk breaks the string up is only of interest to awk. Once the string is printed to stdout, it is still the same string as always. If you want the shell to use , as a field delimiter, you have to tell the shell to do so.
Here's one way to do it:
(
OLDIFS=$IFS
IFS=,
tokens=($omd_response)
IFS=$OLDIFS
for token in "${tokens[#]}"; do
# something with token
done
)
The ( and ) are just to execute all that in a subshell, making the shell variables temporaries. You can do it without.
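A sketch of an alternative that avoids the IFS save/restore dance entirely (and the unquoted $omd_response expansion, which is also subject to globbing): let read do the splitting, since an IFS set as a command prefix applies only to that one command:
IFS=',' read -r -a tokens <<< "$omd_response"
for token in "${tokens[@]}"; do
    printf -- '-metadata %s\n' "$token"
done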
First, please accept my apologies: I don't have a recent bash at hand, so I can't try the code below (no arrays!).
But it should work; if not, you should be able to tweak it to work (or ask underneath, providing a little context on what you see, and I'll help fix it).
nb_fields=$(echo "${omd_response}" | tr ',' '\n' | wc -l | awk '{ print $1 }')
#The nb_fields will be correct UNLESS ${omd_response} contains a trailing ",",
#in which case it would be 1 too big, and below would create an empty
# __tokens[last_one], giving an extra `-metadata ""`. Easily corrected if it happens.
#the code below assumes there is at least 1 field... You should maybe check that.
#1) we create the __tokens[] array
for field in $( seq 1 $nb_fields )
do
#optional: if field is 1 or $nb_fields, add processing to get rid of the { or } ?
__tokens[$field]=$(echo "${omd_response}" | cut -d ',' -f "${field}")
done
#2) we use it to output what we want
for i in $( seq 1 $nb_fields )
do
printf '-metadata "%s" ' "${__tokens[$i]}"
#will output everything on 1 line.
#You could add a \n just before the last ' so each value goes on its own line.
done
So I loop over field numbers, instead of over what could be space- or tab-separated values.
