Print a file in a single row (ksh/shell)

I have the file DATA, and within it there is:
Name | Karlstrom|
Description | New_Server|
Type | UNIX OS|
Formula | y=kx+j |
Severity | Critical|
I need to know how to display the data like this:
Name| Karlstrom|Description| New_Server|Type| UNIX OS|Formula| y=kx+j|Severity| Critical|
Using Korn shell (ksh).

The requirements do not explain all cases, but the following code will handle your example input:
sed -e 's/ *|/|/' DATA | tr -d "\n"; echo
# Output:
Name| Karlstrom|Description| New_Server|Type| UNIX OS|Formula| y=kx+j |Severity| Critical|
I added an echo after the command so that the shell prompt ends up on the next line. Note that without the g flag, sed replaces only the first " |" on each line; that is why the second pipe on the Formula line keeps its leading blank in the output above.
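Since the question asks for Korn shell specifically, the same join can be done with shell builtins alone. A minimal sketch, assuming ksh93 (for the ${var//pattern/string} expansion) and a single blank before each pipe, as in the sample:
out=
while IFS= read -r line; do
    out="$out$line"
done < DATA
print -r -- "${out// |/|}"
The loop concatenates the lines; the expansion then strips the blank before every pipe in one go.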

I don't think the shell matters here. Do you have the cat, awk and sed utilities? Then you can do this, for example:
cat DATA | awk 'BEGIN {s=""} {s=s$0} END {print s}' | sed 's/ *|/|/g'

Related

How to save results in separate columns in Excel from a shell script?

I am saving the execution result in an Excel sheet. The result shows up in new rows, like below:
I have used the below command:
$ bash eg.sh Behatscripts.txt | egrep -w 'Executing the|scenario' >> output.xls
I want to display result like below:
  | A                                      | B                   | C |
1 | Executing the script:cap_dutch_home    | 1 scenario(1passed) |   |
2 | Executing the script:cap_english_home  | 1 scenario(1passed) |   |
One more thing: while executing, it creates output.xls as a separate file instead of using the one that already exists.
Thanks for any suggestions.
You can use this, with awk:
bash eg.sh Behatscripts.txt | egrep -w 'Executing the|scenario' | awk 'BEGIN {print "Column_A\tColumn_B"}NR%2{printf "%s \t",$0;next;}1' >> output.xls
Without egrep:
bash eg.sh Behatscripts.txt | awk '/Executing the|scenario/' | awk 'BEGIN {print "Column_A\tColumn_B"}NR%2{printf "%s \t",$0;next;}1' >> output.xls
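For reference, the NR%2 part is what pairs the lines: on odd-numbered lines awk prints the line followed by a tab and skips ahead; the trailing 1 then prints each even-numbered line with the usual newline, completing the row. A quick sketch with made-up input:
printf 'Executing the script:cap_dutch_home\n1 scenario(1passed)\n' | awk 'BEGIN {print "Column_A\tColumn_B"} NR%2 {printf "%s \t",$0; next} 1'
This prints the Column_A/Column_B header followed by one tab-separated data row.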

One-liner file monitoring

I have a logfile continuously filling with new entries.
I wish to monitor this file, grep for a specific line and then extract and use parts of that line in a curl command.
I had a look at How to grep and execute a command (for every match)
This would work in a script, but I wonder whether it is possible to achieve the same with the one-liner below, using xargs or something else.
Example:
Tue May 01|23:59:11.012|I|22|Event to process : [imsi=242010800195809, eventId = 242010800195809112112, msisdn=4798818181, inbound=false, homeMCC=242, homeMNC=01, visitedMCC=238, visitedMNC=01, timestamp=Tue May 12 11:21:12 CEST 2015,hlr=null,vlr=4540150021, msc=4540150021 eventtype=S, currentMCC=null, currentMNC=null teleSvcInfo=null camelPhases=null serviceKey=null gprsenabled= false APNlist: null SGSN: null]|com.uws.wsms2.EventProcessor|processEvent|139
Extract the fields I want and semi-colon separate them:
tail -f file.log | grep "Event to process" | awk -F'=' '{print $2";"$4";"$12}' | tr -cd '[[:digit:].\n.;]'
Curl command, e.g. something like:
http://user:pass@www.some-url.com/services/myservice?msisdn=...&imsi=...&vlr=...
Thanks!
Try this:
tail -f file.log | grep "Event to process" | awk -F'=' '{print $2" "$4" "$12; }' | tr -cd '[[:digit:].\n. ]' | while read -r imsi msisdn vlr ; do curl "http://user:pass@www.some-url.com/services/myservice?msisdn=$msisdn&imsi=$imsi&vlr=$vlr" ; done
(Note that on the sample line $2 holds the imsi and $4 the msisdn, so read must take them in that order.)
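One caveat the answer above glosses over: when the pipeline hangs off tail -f, stdio buffering in the middle stages can hold results back for a long time. A sketch of the same idea with buffering addressed, assuming GNU grep (for --line-buffered) and an awk that supports fflush(); the number cleanup moves into awk so tr drops out of the pipeline:
tail -f file.log | grep --line-buffered "Event to process" \
| awk -F'=' '{s = $2" "$4" "$12; gsub(/[^0-9. ]/, "", s); print s; fflush()}' \
| while read -r imsi msisdn vlr; do
    curl "http://user:pass@www.some-url.com/services/myservice?msisdn=$msisdn&imsi=$imsi&vlr=$vlr"
done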

Error calling system() within awk

I'm trying to execute a system command to find out how many unique references a csv file has in its first seven characters as part of a larger awk script that processes the same csv file. There are duplicate entries and I don't want awk to parse the whole file twice so I'm avoiding NR. The gist of this part of the script is:
#!/bin/bash
awk '
{
#do some stuff, then when finished, count the number of unique references
productFile="BusinessObjects.csv";
systemCall = sprintf( "cat %s | cut -c 1-7 | sort | uniq | wc -l", $productFile );
productCount=`system( systemCall )`-1; #subtract 1 to remove column label row
}' < BusinessObjects.csv
And the interpreter doesn't like it:
awk: cmd. line:19: ^ syntax error
./awkscript.sh: line 38: syntax error near unexpected token '('
./awkscript.sh: line 38: systemCall = sprintf( "cat %s | cut -c 1-7 | sort | uniq | wc -l", $productFile );
If I hard-code the system command
productCount=`system( "cat BusinessObjects.csv | cut -c 1-7 | sort | uniq | wc -l" )`-1;
I get:
./awkscript.sh: command substitution: line 39: syntax error near unexpected token '"cat BusinessObjects.csv | cut -c 1-7 | sort | uniq | wc -l"'
./awkscript.sh: command substitution: line 39: 'system( "cat BusinessObjects.csv | cut -c 1-7 | sort | uniq | wc -l" )'
Technically, I could do this outside of awk at the start of the shell script, store the result in a shell variable, and then pass it to awk using -v, but it's not great for the readability of the awk script (it's a few hundred lines long). Do I have a space or quotes in the wrong place? I've tried fiddling, but I can't seem to present the call to system() in a way that the interpreter will accept. Finally, is there a more sensible way to do this?
Edit: the csv file is indeed semicolon-delimited, so it's best to cut using the delimiter rather than the number of chars (thanks!).
ProductRef;Data1;Data2;etc
1234567;etc;etc;etc
Edit 2:
I'm trying to parse a csv file whose first column is full of N unique product references, and to create a series of associated HTML pages that include a "Page n of N" information field. It's (painfully obviously) the first time I've used awk, but it seemed like an appropriate tool for parsing csv files. Hence I'm trying to count and return the number of unique references. At the shell
cut -d\; -f1 BusinessObjects.csv | sort | uniq | wc -l
works fine, but I can't get it working inside awk by doing
#!/bin/bash
if [ -n "$1" ]
then
productFile=$1
else
echo "Missing product file argument."
exit
fi
awk -v productFile=$productFile '
BEGIN {
FS=";";
productCount = 0;
("cut -d\"\;\" -f1 " productFile " | sort | uniq | wc -l") | getline productCount;
productCount -=1; #remove the column label row
}
{
print productCount;
}'
I get a syntax error on the cut code if I don't wrap the semicolon in \"\;\" and the script just hangs without printing anything when I do.
I don't think you can use backticks in awk, as in this line:
productCount=`system( systemCall )`-1; #subtract 1 to remove column label row
You can read the output without system, by running your command directly and reading the result with getline instead:
systemCall | getline productCount
productCount -= 1
Or more completely
productFile = "BusinessObjects.csv"
systemCall = "cut -c 1-7 " productFile " | sort | uniq | wc -l"
systemCall | getline productCount
productCount -= 1
No need to use sprintf or to include cat.
Assigning strings to variables is also optional. You can just have "xyz" | getline ....
sort | uniq can just be sort -u if supported.
Quoting may be necessary if the filename has spaces or characters that could confuse the command.
getline may alter global variables differently from expected. See https://www.gnu.org/software/gawk/manual/html_node/Getline.html.
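As to the closing question, whether there is a more sensible way: the external pipeline can be dropped entirely by counting unique first fields in awk itself. Since the "Page n of N" output needs the total before the main pass, here is a sketch that feeds the file to awk twice, assuming the semicolon-delimited layout from the edit; the per-line print is a hypothetical stand-in for the real processing:
awk -F';' '
NR == FNR { if (FNR > 1 && !seen[$1]++) productCount++; next }  # first pass: count unique refs, ignoring the header
FNR == 1 { next }                                               # second pass: skip the header row
{ print "processing " $1 " (one of " productCount " unique refs)" }  # hypothetical per-line work
' "$productFile" "$productFile"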
Could something like this be an option?
$ cat productCount.sh
#!/bin/bash
if [ -n "$1" ]
then
productCount=`cut -c 1-7 "$1" | sort | uniq | wc -l`
echo "$productCount"
else
echo "please supply a filename as parameter"
fi
$ ./productCount.sh BusinessObjects.csv
9
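For completeness, since the question mentions passing the value in with -v but worries about readability: the handoff itself is short. A sketch, where the per-line print is again a hypothetical stand-in for the real work:
productCount=$(cut -c 1-7 "$1" | sort | uniq | wc -l)
awk -v productCount="$((productCount - 1))" '
BEGIN { FS = ";" }
{ print $1 ": " productCount " unique refs in total" }  # hypothetical main work
' "$1"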

Create name/value pairs based on file output

I'd like to format the output of cat myFile.txt in the form of:
app1=19
app2=7
app3=20
app4=19
Using some combination of piping output through various commands.
What would be the easiest way to achieve this?
I've tried using cut -f2 but this does not change the output, which is odd.
Here is the basic command/file output:
[user#hostname ~]$ cat myFile.txt
1402483560882 app1 19
1402483560882 app2 7
1402483560882 app3 20
1402483560882 app4 19
Based on your input (your cut -f2 did nothing because cut splits on tabs by default, and this file is space-separated):
awk '{ print $2 "=" $3 }' myFile
Output
app1=19
app2=7
app3=20
app4=19
Another solution, using sed and cut:
cat myFile.txt | sed 's/ \+/=/g' | cut -f 2- -d '='
Or using tr and cut:
cat myFile.txt | tr -s ' ' '=' | cut -f 2- -d '='
You could also try this sed one-liner:
$ sed 's/^\s*[^ ]*\s\([^ ]*\)\s*\(.*\)$/\1=\2/g' file
app1=19
app2=7
app3=20
app4=19
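If you would rather skip awk and sed entirely, a plain shell loop also works; a minimal sketch, assuming the three-column layout shown above (the leading timestamp goes into the throwaway variable _):
while read -r _ name value; do
    printf '%s=%s\n' "$name" "$value"
done < myFile.txt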

How do I sed/grep the last word in a filename?

I have a couple of filenames for different languages, and I need to extract just the language part with grep or sed. I am using gconftool-2 -R / and want to pipe it to a command that brings out just the language code.
active = file.so,sv.xml
active = file.so,en_GB.xml
active = file.so,en_US.xml
I need the sv and en_GB parts of the filenames. How can I do that in the most effective way? I am thinking of something like gconftool-2 -R / | sed -n -e '/active =/p' -e '/\.$/', but then I get stuck, as I don't know how to print just the part I need rather than the whole line.
awk -F. '{print $(NF-1)}'
NF is the number of fields; awk counts from 1, so the second-to-last field is $(NF-1).
The -F. option says that fields are separated by "." rather than whitespace.
How about using simple cut:
cut -d. -f3 filename
Test:
[jaypal:~/Temp] cat filename
active = file.so.sv.xml
active = file.so.en_GB.xml
active = file.so.en_US.xml
[jaypal:~/Temp] cut -d. -f3 filename
sv
en_GB
en_US
Based on the updated input:
[jaypal:~/Temp] cat filename
active = file.so,sv.xml
active = file.so,en_GB.xml
active = file.so,en_US.xml
[jaypal:~/Temp] cut -d, -f2 filename | sed 's/\..*//g'
sv
en_GB
en_US
OR
Using awk:
[jaypal:~/Temp] awk -F[,.] '{print $3}' filename
sv
en_GB
en_US
[jaypal:~/Temp] awk -F[,.] '{print $(NF-1)}' filename
sv
en_GB
en_US
OR
Using grep and tr:
[jaypal:~/Temp] egrep -o ",\<.[^\.]*\>" filename | tr -d ,
sv
en_GB
en_US
awk would be my main tool for this task but since that has already been proposed, I'll add a solution using cut instead
cut -d. -f3
i.e. use . as delimiter and select the third field.
Since you tagged the question with bash, I'll add a pure bash solution as well:
#!/usr/bin/bash
IFS=.
while read -r -a LINE
do
    echo "${LINE[2]}"
done < file_name
Try:
gconftool-2 -R / | grep '^active = ' | sed 's,\.[^.]\+$,,; s,.*\.,,'
The first sed command says to remove a dot followed by everything not a dot until the end of line; the second one says to remove everything until the last dot.
This might work for you:
gconftool-2 -R / | sed -n 's/^active.*,\([^.]*\).*/\1/p'
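Since the question is tagged bash, the language code can also be cut out with parameter expansion alone, with no external process; a sketch for a single line of the comma-separated form:
line='active = file.so,sv.xml'
tmp=${line##*,}     # remove everything through the last comma, leaving sv.xml
echo "${tmp%%.*}"   # remove from the first dot onward, leaving sv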
