echo working in a very different way - bash

Here PLUGIN=ABC
$ echo "{\"PluginName\": \"${PLUGIN}\""
""PluginName": "ABC
$ echo "{\"PluginName\":${PLUGIN}\",\"Filename\":\"${VAR}\" , \"ErrorString\":"
","Filename":"ABC" , "ErrorString":eployerProps
However, if I change the above variable PLUGIN to any other string, it works.
$ echo "{\"PluginName\":\"${PLUGINS}\",\"Filename\":\"${VAR}\" , \"ErrorString\":"
{"PluginName":"ABC","Filename":"ABC" , "ErrorString":
I'm not able to understand what the reason is. This is bash 4; however, on another server it works fine.

I cannot reproduce your problem. This is what my bash 4.4.23(1) prints:
$ PLUGIN=ABC
$ echo "{\"PluginName\": \"${PLUGIN}\""
{"PluginName": "ABC"
However, if I change the above variable PLUGIN to any other string, it works.
Have you noticed that your second command differs from the first one?
echo "{\"PluginName\":${PLUGIN}\",\"Filename\":\"${VAR}\" , \"ErrorString\":"
| |
different | \ different
| |
echo "{\"PluginName\":\"${PLUGINS}\",\"Filename\":\"${VAR}\" , \"ErrorString\":"
However, you could make your life a lot easier by using printf:
$ PLUGIN=ABC
$ VAR=XYZ
$ printf '{"PluginName": "%s"\n' "$PLUGIN"
{"PluginName": "ABC"
$ printf '{"PluginName":"%s","Filename":"%s","ErrorString":\n' "$PLUGIN" "$VAR"
{"PluginName":"ABC","Filename":"XYZ","ErrorString":
or even better for a general approach:
$ printf '{'; printf '"%s":"%s",' PluginName "$PLUGIN" Filename "$VAR"
{"PluginName":"ABC","Filename":"XYZ",

Here PLUGIN=ABC
No, that would not explain the output you're seeing. It's much more likely that PLUGIN=$'ABC\r' (i.e. A B C followed by a carriage return).
Carriage return moves the cursor back to the beginning of the line when printed to a terminal, which is why your output looks so confusing.
Try echo "$PLUGIN" | cat -v or echo "$PLUGIN" | xxd (or any other hex dump tool) to see what's actually in there.
But this happens only on one specific server.
If PLUGIN is the result of reading a line from a file, then this file is probably in Windows/DOS format on that server (with Carriage Return / Line Feed endings) instead of Unix format (Line Feed only).
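In that case, stripping the carriage returns should fix it. A couple of common options (the file name file.txt here is just a placeholder):
$ dos2unix file.txt                      # convert the file in place, if dos2unix is installed
$ tr -d '\r' < file.txt > file.unix.txt  # or strip CRs while copying
$ PLUGIN=${PLUGIN%$'\r'}                 # or strip a trailing CR from the variable itself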


bash for finding line that contains string and including that as part of command

I have a command that will print out three lines:
1-foo-1
1-bar-1
1-baz-1
I would like to include the result of this as part of a command where I search for the line that contains the string "bar" and then include that entire line as part of a command as follows:
vi 1-bar-1
I was wondering what the bash awk and/or grep combination would be for getting this. Thank you.
I had tried the following but I'm getting the entire output. For example, I'd have a file rows.txt with this content:
1-foo-1
1-bar-1
1-baz-1
and then I'd run echo $(cat rows.txt | awk /^1-baz.*$/) and I'd get 1-foo-1 1-bar-1 1-baz-1 as a result when I'm looking for just 1-baz-1. Thank you.
vi $(echo -e "1-foo-1\n1-bar-1\n1-baz-1\n" | grep bar | awk -F'-' '{print $2}')
The above command is equivalent to vi bar.
P.S.
echo -e "1-foo-1\n1-bar-1\n1-baz-1\n" is a demo to mimic your command output.
P.S.
You updated the question... Now your goal becomes:
I'm looking for just 1-baz-1.
Then, the solution would be just
cat rows.txt | grep baz
I search for the line that contains the string "bar":
A naive approach would be to just use
vi $(grep -F bar rows.txt)
However, you have to keep in mind a few things:
If your file contains several lines with bar, say
1-bar-1
2-bar-2
the editor will open both files. This may or may not be what you want.
Another point to consider: If your file contains a line
1-foobar-1
this would be chosen as well. If you don't want this to happen, use
vi $(grep -Fw bar rows.txt)
The -w option requires that the pattern must be delimited by word boundaries.
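For instance, with a hypothetical rows.txt containing both 1-bar-1 and 1-foobar-1, the difference looks like this:
$ printf '%s\n' 1-bar-1 1-foobar-1 > rows.txt
$ grep -F bar rows.txt
1-bar-1
1-foobar-1
$ grep -Fw bar rows.txt
1-bar-1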

Using sed to append to a line in a file

I have a script running to use output from commands that I run using a string from the file that I want to update.
for CLIENT in `cat /home/"$ID"/file.txt | awk '{print $3}'`
do
sed "/$CLIENT/ s/$/ $(sudo bpgetconfig -g $CLIENT -L | grep -i "version name")/" /home/"$ID"/file.txt >> /home/"$ID"/updated_file.txt
done
The output prints the entire file once for each loop iteration, with the matching line updated.
How do I fix it so that only the matching line is sent to the new file?
The input file contains lines similar to below:
"Host OS" "OS Version" "Hostname"
I want to run a script that will use the hostname to run a command and grab details about an application on the host and then print only the application version to the end of the line with the host in it:
"Host OS" "OS Version" "Hostname" "Application Version
What you're doing is very fragile (e.g. it'll break if the string represented by $CLIENT appears on other lines, or multiple times on one line, or as a substring, or contains regexp metachars, or ...), inefficient (you're reading file.txt once per iteration of the loop instead of once in total), and full of anti-patterns (using a for loop to read lines of input, plus the UUOC, plus deprecated backticks, etc.).
Instead, let's say the command you wanted to run was printf '%s' 'the_third_string' | wc -c to replace each third string with the count of its characters. Then you'd do:
while read -r a b c rest; do
    printf '%s %s %s %s\n' "$a" "$b" "$(printf '%s' "$c" | wc -c)" "$rest"
done < file
or if you had more to do and so it was worth using awk:
awk '{
    cmd = "printf \047%s\047 \047" $3 "\047 | wc -c"
    if ( (cmd | getline line) > 0 ) {
        $3 = line
    }
    close(cmd)
    print
}' file
For example given this input (courtesy of Rabbie Burns):
When chapman billies leave the street,
And drouthy neibors, neibors, meet;
As market days are wearing late,
And folk begin to tak the gate,
While we sit bousing at the nappy,
An' getting fou and unco happy,
We think na on the lang Scots miles,
The mosses, waters, slaps and stiles,
That lie between us and our hame,
Where sits our sulky, sullen dame,
Gathering her brows like gathering storm,
Nursing her wrath to keep it warm.
We get:
$ awk '{cmd="printf \047%s\047 \047"$3"\047 | wc -c"; if ( (cmd | getline line) > 0 ) $3=line; close(cmd)} 1' file
When chapman 7 leave the street,
And drouthy 8 neibors, meet;
As market 4 are wearing late,
And folk 5 to tak the gate,
While we 3 bousing at the nappy,
An' getting 3 and unco happy,
We think 2 on the lang Scots miles,
The mosses, 7 slaps and stiles,
That lie 7 us and our hame,
Where sits 3 sulky, sullen dame,
Gathering her 5 like gathering storm,
Nursing her 5 to keep it warm.
The immediate answer is to use sed -n to not print every line by default, and add a p command where you do want to print. But running sed in a loop is nearly always the wrong thing to do.
The following avoids the useless cat, the don't read lines with for antipattern, the obsolescent backticks, and the loop; but without knowledge of what your files look like, it's rather speculative. In particular, does command need to run for every match separately?
file=/home/"$ID"/file.txt
pat=$(awk '{ printf "\\|$3"}' "$file")
sed -n "/${pat#\\|}/ s/$/ $(command)/p' "$file" >> /home/"$ID"/updated_file.txt
The main beef here is collecting all the patterns we want to match into a single regex, and then running sed only once.
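For illustration, with hypothetical sample data whose third field is hostA / hostB, the collected pattern and its stripped form look like this:
$ printf '%s\n' 'Linux 7.2 hostA' 'AIX 7.1 hostB' > file.txt
$ pat=$(awk '{ printf "\\|%s", $3 }' file.txt)
$ echo "$pat"
\|hostA\|hostB
$ echo "${pat#\\|}"
hostA\|hostB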
If command needs to be run uniquely for each line, this will not work out of the box. Maybe then turn back to a loop after all. If your task is actually to just run a command for each line in the file, try
while read -r line; do
    # set -- $line
    # client=$3
    printf "%s " "$line"
    command
done <file >>new_file
I included but commented out commands to extract the third field into $client before you run command.
(Your private variables should not have all-uppercase names; those are reserved for system variables.)
Perhaps in fact this is all you need:
while read -r os osver host; do
    printf "%s " "$os" "$osver" "$host"
    command "$host" something something
done </home/"$ID"/file.txt >/home/"$ID"/updated_file.txt
This assumes that the output of command is a well-formed single line of output with a final newline.
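As a minimal sketch, here is the same loop with a stand-in command function (hypothetical; the real bpgetconfig pipeline from the question would go in its place) and unquoted sample fields:
command () { expr length "$1"; }   # stand-in: print the length of the hostname

printf '%s\n' 'Linux 7.2 hostA' 'AIX 7.1 hostBB' > file.txt

while read -r os osver host; do
    printf "%s " "$os" "$osver" "$host"
    command "$host"
done <file.txt >updated_file.txt
which produces:
Linux 7.2 hostA 5
AIX 7.1 hostBB 6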
This might work for you (GNU sed, bash/dash):
echo "command () { expr length \"\$1\"; }" >> funlib
sed -E 's/^((\S+\s){2})(\S+)(.*)/. .\/funlib; echo "\1$(command "\3")\4"/e' file
As an example of a command, I create a function called command and append it to a file funlib in the current directory.
The sed invocation sources funlib and runs the command function on the right-hand side of the substitution, inside an interpolated string displayed by the echo command; this is made possible by the evaluation flag e.
N.B. The evaluation uses the dash shell or whatever the /bin/sh is symlinked to.
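As a quick sanity check with hypothetical sample data (and the funlib created above in the current directory):
$ printf '%s\n' 'Linux 7.2 hostA' 'AIX 7.1 longhostnameB' > file
$ sed -E 's/^((\S+\s){2})(\S+)(.*)/. .\/funlib; echo "\1$(command "\3")\4"/e' file
Linux 7.2 5
AIX 7.1 13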

Can't properly print file in Bash

I'm trying to echo the contents of this link and it exhibits what to me is bizarre behavior.
git@gud:/home/git$ URL="https://raw.githubusercontent.com/fivethirtyeight/data/master/births/US_births_1994-2003_CDC_NCHS.csv"
git@gud:/home/git$ content=$(wget $URL -q -O -)
git@gud:/home/git$ echo $content
2003,12,31,3,12374_month,day_of_week,births
I expected this code to print the contents as I see them when I open the link in a browser. But instead, the output, in its entirety, is 2003,12,31,3,12374_month,day_of_week,births, that's it.
I actually see this behaviour locally as well, after downloading the file. I tried it both using curl and by simply copying and pasting into a text editor and saving the file. They all exhibit the same behavior. The same happens with cat, cut, head, tail and even awk.
This doesn't happen with other files and works fine on Python. What am I missing? How do I get it to work?
I realize that the file doesn't end with a new line character, but adding it doesn't fix it.
I'm on Ubuntu 18.04.1 LTS and the CLI I'm using is Bash release 4.4.19(1).
The data file uses Mac-style end-of-line markers (carriage return only). When you echo the content, or just cat the file, the lines all print over each other. If you were to view the file with less or vim, you would see the complete content.
Try this:
$ URL="https://raw.githubusercontent.com/fivethirtyeight/data/master/births/US_births_1994-2003_CDC_NCHS.csv"
$ curl -o data.csv "$URL"
The wc command thinks that the file has zero lines:
$ wc -l data.csv
0 data.csv
Now let's translate those end-of-line markers:
$ tr '\r' '\n' < data.csv > data-modified.csv
wc now sees a more reasonable number of lines:
$ wc -l data-modified.csv
3652 data-modified.csv
And if we were to cat the file:
$ cat data-modified.csv
.
.
.
2003,12,28,7,7645
2003,12,29,1,12823
2003,12,30,2,14438
2003,12,31,3,12374
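To make the carriage returns visible, cat -v renders each one as ^M; the first 45 characters of the file's single, very long physical line are the header followed by ^M:
$ cat -v data.csv | cut -c 1-45
year,month,date_of_month,day_of_week,births^M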

Unix - How to convert octal escape sequences via pipe

I'm pulling data from a file (in this case an exim mail log) and often it saves characters in an escaped octal sequence like \NNN where 'N' represents an octal value 0-7. This mainly happens when the subject is written in non-Latin characters (Arabic for example).
My goal is to find the cleanest way to convert these octal characters to display correctly in my utf-8 enabled terminal, specifically in 'less' as there is the potential for lots of output.
The best approach I have found so far is as follows:
arbitrary_stream | { while read -r temp; do printf %b "$temp\n"; done } | less
This seems to work pretty well; however, I would assume that there is some translator tool, or maybe even a flag built into 'less', to handle this. I also found that if you use something like sed to inject a 0 after each \, you can store it as a variable and then use 'echo -e $data', but this was messier than the previous solution.
Test case:
octalvar="\342\202\254"
expected output in less:
€
I'm looking for something cleaner, more complete or just better than my above solution in the form of either:
echo $octalvar | do_something | less
or
echo $octalvar | less --some_magic_flag
Any suggestions? Or is my solution about as clean as I can expect?
Conversion in GNU awk (using strtonum). It proved to be a hassle, so the code is a mess and could maybe be streamlined; feel free to advise:
awk '{
    while(match($0,/\\[0-7]{3}/)) {    # search for \NNNs
        o=substr($0,RSTART,RLENGTH)    # extract it
        sub(/\\/,"0",o)                # replace \ with 0 for strtonum
        c=sprintf("%c",strtonum(o))    # convert to a character
        sub(/\\[0-7]{3}/,c)            # replace the \NNN with the char
    }
}1' foo > bar
or paste the code between the single quotes into a file above_program.awk and run it like awk -f above_program.awk foo > bar. Test file foo:
test 123 \342\202\254
Run it in a non-UTF-8 locale; I used the C locale:
$ locale
...
LC_ALL=C
$ awk -f above_program.awk foo
test 123 €
If you run it in a UTF-8 locale, each value is converted to a separate character rather than a raw byte, and the output is mangled:
$ locale
...
LC_ALL=en_US.utf8
$ awk -f above_program.awk foo
test 123 â¬
This is my current version:
echo $arbitrary | { IFS=$'\n'; while read -r temp; do printf %b "$temp\n"; done; unset IFS; } | iconv -f utf-8 -t utf-8 -c | less
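Applied to the test case from the question, the printf %b step does the conversion, and the iconv pass just drops any sequences that are not valid UTF-8:
$ octalvar="\342\202\254"
$ echo "$octalvar" | { while read -r temp; do printf %b "$temp\n"; done; } | iconv -f utf-8 -t utf-8 -c
€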

Bash show characters if not in string

I am trying out bash, and I am trying to make a simple hangman game now.
Everything is working but I don't understand how to do one thing:
I am showing the user the word with the guessed letters (so, for example, if the word is hello world and the user guessed 'l', I show them **ll* ***l*).
I store the letters that the user already tried in var guess
I do that with the following:
echo "${word//[^[:space:]$guess]/*}"
The thing I want to do now is echo the alphabet, but leave out the letters that the user already tried, so in this case show the full alphabet without the L.
I already tried to do it the same way as I showed above, but it won't quite work.
If you need any more info please let me know.
Thanks,
Tim
You don't show what you tried, but parameter expansion works fine.
$ alphabet=abcdefghijklmnopqrstuvwxyz
$ word="hello world"
$ guesses=aetl
$ echo "${word//[^[:space:]$guesses]/*}"
*ell* ***l*
$ echo "${alphabet//[$guesses]/*}"
*bcd*fghijk*mnopqrs*uvwxyz
First, store both strings in files, one character per line:
sed 's/./&\n/g' <<< "$guess" | sort > guessfile
sed 's/./&\n/g' <<< "$word" | sort > wordfile
Then we can keep the lines that are present only in wordfile (i.e. the characters that have not been guessed yet) and paste them back together into a string:
grep -xvf guessfile wordfile | paste -s -d'\0'
And of course we clean up after ourselves:
rm wordfile
rm guessfile
If the output is not correct, try switching arguments in grep (i.e. wordfile guessfile instead of guessfile wordfile).
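For instance, applied to the alphabet case from the question (GNU sed assumed, since \n in the replacement becomes a newline):
$ guess=aetl
$ alphabet=abcdefghijklmnopqrstuvwxyz
$ sed 's/./&\n/g' <<< "$guess" | sort > guessfile
$ sed 's/./&\n/g' <<< "$alphabet" | sort > wordfile
$ grep -xvf guessfile wordfile | paste -s -d'\0'
bcdfghijkmnopqrsuvwxyz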
