How to truncate trailing space in xargs - bash

I would like to use xargs to list the contents of some files based on the output of command A. xargs replace-str seems to be adding a space to the end and causing the command to fail. Any suggestions? I know this can be worked around using a for loop, but I'm curious how to do this using xargs.
lsscsi |awk -F\/ '/ATA/ {print $NF}' | xargs -L 1 -I % cat /sys/block/%/queue/scheduler
cat: /sys/block/sda /queue/scheduler: No such file or directory

The problem is not with xargs -I, which does not append a space to each argument, which can be verified as follows:
$ echo 'sda' | xargs -I % echo '[%]'
[sda]
Incidentally, specifying -L 1 in addition to -I is pointless: -I implies line-by-line processing.
Therefore, it must be the output from the command that provides input to xargs that contains the trailing space.
You can adapt your awk command to fix that:
lsscsi |
awk -F/ '/ATA/ {sub(/ $/,"", $NF); print $NF}' |
xargs -I % cat '/sys/block/%/queue/scheduler'
sub(/ $/,"", $NF) replaces a trailing space in field $NF with the empty string, thereby effectively removing it.
Note how I've (single-)quoted cat's argument so as to make it work even with filenames with spaces.
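If you want to see the trailing space pass through xargs for yourself, here is a minimal reproduction, with printf standing in for the lsscsi pipeline, plus a generic sed-based trim as an alternative upstream fix:

```shell
# trailing whitespace in the input line is preserved by xargs -I:
printf 'sda \n' | xargs -I % echo "[%]"
# [sda ]

# stripping it before xargs sees it fixes the substituted value:
printf 'sda \n' | sed 's/[[:space:]]*$//' | xargs -I % echo "[%]"
# [sda]
```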

lsscsi |awk -F\/ '/ATA/ {print $NF}'| awk '{print $NF}' | xargs -L 1 -I % cat /sys/block/%/queue/scheduler
The first awk statement splits on "/", so everything else becomes field content; in this case "sda " becomes the whole last field, including the trailing space. By default, however, awk splits on whitespace, so after the pipe the second awk prints $NF (the last word of the line) and the trailing space is consumed as a delimiter. awk '{print $1}' would do the same, because there is only one word, "sda", which is both the first and the last field.
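The difference between the two splitting modes is easy to reproduce without lsscsi (printf stands in for the real pipeline):

```shell
# with -F/ the trailing space stays inside the last field:
printf 'a/b/sda \n' | awk -F/ '{print "[" $NF "]"}'
# [sda ]

# with the default FS, leading/trailing whitespace is trimmed automatically:
printf 'sda \n' | awk '{print "[" $NF "]"}'
# [sda]
```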


How to do text processing using awk to cut last field in a line?

I have this scenario and want to know if I can improve the awk output.
cat example.txt
"id": "/subscriptions/fbfa3437-c63c-4ed7-b9d3-fe595221950d/resourceGroups/rg-ooty/providers/Microsoft.Compute/virtualMachines/fb11b768-4d9f-4e83-b7dc-ee677f496fc9",
"id": "/subscriptions/fbfa3437-c63c-4ed7-b9d3-fe595221950d/resourceGroups/rg-ooty/providers/Microsoft.Compute/virtualMachines/fbee83e8-a84a-4b22-8197-fc9cc924801f",
"id": "/subscriptions/fbfa3437-c63c-4ed7-b9d3-fe595221950d/resourceGroups/rg-ooty/providers/Microsoft.Compute/virtualMachines/fc224f83-57f4-41eb-aee3-78f18d055704",
I am looking to extract the part after /virtualMachines/.
Hence, I used the below awk command to get the output.
cat example.txt | awk '{print $2}' | awk -F"/" '{print $(NF)}' | awk -F'",' '{print $1}'
fb11b768-4d9f-4e83-b7dc-ee677f496fc9
fbee83e8-a84a-4b22-8197-fc9cc924801f
fc224f83-57f4-41eb-aee3-78f18d055704
Is there any way to use something like getline, or to combine the multiple awk invocations into a single awk execution, or otherwise improve the command to get this output?
Please suggest.
Use " and / as field separators and print second last field:
awk -F '["/]' '{print $(NF-1)}' file
Output:
fb11b768-4d9f-4e83-b7dc-ee677f496fc9
fbee83e8-a84a-4b22-8197-fc9cc924801f
fc224f83-57f4-41eb-aee3-78f18d055704
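To see exactly how -F '["/]' slices such a line, you can enumerate the fields; this uses a shortened, hypothetical id, since the real subscription paths are long:

```shell
printf '"id": "/subs/sub-1/virtualMachines/abc-123",\n' |
awk -F '["/]' '{for (i = 1; i <= NF; i++) printf "%d=[%s]\n", i, $i}'
# 1=[]  2=[id]  3=[: ]  4=[]  5=[subs]  6=[sub-1]  7=[virtualMachines]  8=[abc-123]  9=[,]
```

The trailing , lands in its own field after the closing quote, which is why the wanted value is $(NF-1) rather than $NF.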
If the spacing of example.txt is as consistent as it seems, then it's simpler to use cut with its -c (select characters by position) option:
cut -c 127-162 example.txt
Output:
fb11b768-4d9f-4e83-b7dc-ee677f496fc9
fbee83e8-a84a-4b22-8197-fc9cc924801f
fc224f83-57f4-41eb-aee3-78f18d055704
You could also use sed for this:
sed 's#.*/\([^/]*\)",#\1#' example.txt
Matches anything (.*) up to the last forward slash /, then captures (\( … \)) any number of non-slash characters ([^/]*), followed by a quote and comma (",) to anchor the end, and replaces the whole match with the captured group (everything between the last slash and the closing ",).
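Applied to one hypothetical line (abbreviated from the question's format), the substitution behaves like this:

```shell
printf '"id": "/subs/sub-1/virtualMachines/abc-123",\n' |
sed 's#.*/\([^/]*\)",#\1#'
# abc-123
```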

awk to ignore leading and trailing space and blank lines and commented lines if any from a file

Need help on awk:
I want awk to ignore leading and trailing spaces, blank lines, and commented lines, if any, from a file.
Here you go,
grep "MyText" FromMyLog.log |awk -F " " '{print $2}'|awk -F "#" '{print $1}'
Here MyText is the key to grep from file FromMyLog.log
-F sets the field separator; here it is the space between the quotes.
'{print $2}' prints the 2nd field of the output; use $1, $2, etc. as your requirement dictates.
awk -F "#" '{print $1}' drops everything after a #, which ignores the commented part of a line.
This is just a hint; modify the code to your requirements. This works for me together with grep.
grep -v '^$\|^\s*\#' <filename> or egrep -v '^[[:space:]]*$|^ *#' <file_name> (if white spaces)
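A quick self-contained check of that filter, spelled with [[:space:]] (the portable form of \s, which is a GNU grep extension):

```shell
printf 'alpha\n\n   # indented comment\n#\nbeta\n' | grep -v '^$\|^[[:space:]]*#'
# alpha
# beta
```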
I think this is what you were asking for:
$> echo -e ' abc \t
\t efg
# alskdjfl
#
awk
# askdfh
' |
awk '
# match if the first non-blank character is not a hash sign
# (whitespace-only lines have no such character, so they are skipped too)
/^[[:space:]]*[^#[:space:]]/ {
# delete any spaces from start and end of line
sub(/^[[:space:]]*/, "");
sub(/[[:space:]]*$/, "");
print
}'
abc
efg
awk
This can be folded onto a single line if so needed. Any problems, an actual example of the input (in a code block in your question) would be helpful.
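Folded onto one line, with the character class [^#[:space:]] requiring the first non-blank character to be something other than #, so whitespace-only lines are skipped as well:

```shell
printf ' abc \t\n\t efg\n# comment\n   \nawk\n' |
awk '/^[[:space:]]*[^#[:space:]]/ { gsub(/^[[:space:]]+|[[:space:]]+$/, ""); print }'
# abc
# efg
# awk
```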
Here's one way to extract required content ignoring spaces
FILE CONTENT
Server: 192.168.XX.XX
Address 1: 192.168.YY.YY
Name: central.google.com
Now to extract the server's address without spaces.
COMMAND
awk -F':' '/Server/ {print $2}' YOURFILENAME | tr -s " "
option -s for squeezing the repetition of spaces.
which gives,
192.168.XX.XX
Here, notice that there is one leading space in the address.
To completely ignore spaces you can change that to,
awk -F':' '/Server/ {print $2}' YOURFILENAME | tr -d '[:space:]'
option -d for removing particular characters, which is '[:space:]' here.
which gives,
192.168.XX.XX
without any leading or trailing spaces.
tr is a UNIX utility for translating, deleting, or squeezing repeated characters; the name stands for translate.
Examples:
echo youareawesome | tr '[:lower:]' '[:upper:]'
gives,
YOUAREAWESOME
Hope that helps.

Count number of Special Character in Unix Shell

I have a delimited file that is separated by octal \036 or Hexadecimal value 1e.
I need to count the number of delimiters on each line using a bash shell script.
I was trying to use awk, but I'm not sure if it is the best way.
Sample Input (| is a representation of \036)
Example|Running|123|
Expected output:
3
awk -F'|' '{print NF-1}' file
Change | to whatever separator you like. If your file can have empty lines then you need to tweak it to:
awk -F'|' '{print (NF ? NF-1 : 0)}' file
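A quick check of that tweak on a line with three delimiters followed by an empty line:

```shell
printf 'a|b|c|\n\n' | awk -F'|' '{print (NF ? NF-1 : 0)}'
# 3
# 0
```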
You can try
awk '{print gsub(/\|/,"")}'
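This works because gsub returns the number of substitutions it performed, which here is exactly the delimiter count; empty lines naturally yield 0:

```shell
printf 'Example|Running|123|\n\n' | awk '{print gsub(/\|/, "")}'
# 3
# 0
```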
Simply try
awk '{print gsub(/\036/, "")}' Input_file
Explanation: gsub replaces every \036 delimiter with the empty string and returns the number of replacements made, so the delimiter count is printed for each line of Input_file.
This will work as far as I know
echo "Example|Running|123|" | tr -cd '|' | wc -c
Output
3
This should work for you:
awk -F '\036' '{print NF-1}' file
3
-F '\036' sets input field delimiter as octal value 036
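Since the real delimiter is the non-printable byte 0x1e, a self-contained test can generate it with printf's octal escape:

```shell
printf 'Example\036Running\036123\036\n' | awk -F '\036' '{print NF-1}'
# 3
```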
Awk may not be the best tool for this. GNU grep has a cool -o option that prints each matching pattern on a separate line. You can then count how many matching lines are generated for each input line, and that's the count of your delimiters. E.g. (where ^^ in the file actually represents hex 1e):
$ cat -v i
a^^b^^c
d^^e^^f^^g
$ grep -n -o $'\x1e' i | uniq -c
2 1:
3 2:
if you remove the uniq -c you can see how it's working. You'll get "1" printed twice because there are two matching patterns on the first line. Or try it with some regular ascii characters and it becomes clearer what the -o and -n options are doing.
If you want to print the line number followed by the field count for that line, I'd do something like:
$ grep -n -o $'\x1e' i | tr -d ':' | uniq -c | awk '{print $2 " " $1}'
1 2
2 3
This assumes that every line in the file contains at least one delimiter. If that's not the case, here's another approach that's probably faster too:
$ tr -d -c $'\x1e\n' < i | awk '{print length}'
2
3
0
0
0
This uses tr to delete (-d) all characters that are not (-c) 1e or \n. It then pipes that stream of data to awk which just counts how many characters are left on each line. If you want the line number, add " | cat -n" to the end.
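Because the file i isn't shown with its real bytes, here is the same pipeline on generated input, using bash's $'...' quoting to produce the 0x1e bytes:

```shell
printf 'a\036b\036c\nd\036e\n' | tr -d -c $'\036\n' | awk '{print length}'
# 2
# 1
```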

Grepping an asterisk through bash

I am validating a few columns in a pipe delimited file. My second column is defaulted to '*'.
E.g. data of file to be validated:
abc|* |123
def|** |456
ghi|* |789
2nd record has 2 stars due to erroneous data.
I tried it as:
Value_to_match="*"
unmatch_count=$(cat <filename> | cut -d'|' -f2 | awk '{$1=$1};1' | grep -vw "$Value_to_match" | sort -n | uniq | wc -l)
echo "$unmatch_count"
This gives me a count of 0 whereas I am expecting 1 (for **), since I used grep with -w (exact word match) and -v (invert match).
How can I grep **?
The problem here is that grep treats ** as a regular expression. To prevent this, use -F to match fixed strings:
grep -F '**' file
However, you have an unnecessarily long chain of piped commands; awk alone can handle it quite well.
If you want to check lines containing ** in the second column, say:
$ awk -F"|" '$2 ~ /\*\*/' file
def|** |456
If you want to count how many of such lines you have, say:
$ awk -F"|" '$2 ~ /\*\*/ {sum++} END {print sum}' file
1
Note the usage of awk:
-F"|" to set the field separator to |.
$2 ~ /\*\*/ to say: hey, in every line check if the second field contains two asterisks (remember we sliced lines by |). We are escaping the * because it has a special meaning as a regular expression.
If you want to output those lines that have just one asterisk as second field, say:
$ awk -F"|" '$2 ~ /^\*[[:space:]]*$/' file
abc|* |123
ghi|* |789
Or check for those not matching this regex with !~:
$ awk -F"|" '$2 !~ /^\*[[:space:]]*$/' file
def|** |456
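Putting the pieces together for the original counting task: the pipeline also needs command substitution, and grep's -x (match the whole line) is safer than -w here, because * is not a word-constituent character, so -w happily matches a lone * inside ** as well. A sketch on the question's sample data (the file name is illustrative):

```shell
# sample data from the question
printf 'abc|* |123\ndef|** |456\nghi|* |789\n' > /tmp/sample.txt

# -x matches whole lines, -F takes * literally
unmatch_count=$(cut -d'|' -f2 /tmp/sample.txt | awk '{$1=$1};1' | grep -vxF '*' | sort -u | wc -l)
echo "$unmatch_count"
# 1
```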

Bash: How can I read from file and display the output in one line?

I'd like to run:
grep nfs /etc/fstab | awk '{print $2}'
[root@nyproxy5 ~]# grep nfs /etc/fstab | awk '{print $2}'
/proxy_logs
/proxy_dump
/sync_logs
[root@nyproxy5 ~]#
And to get the output in one line delimited by space.
How can I do that?
If you don't mind a trailing space (and no newline) at the end, you could use this awk script:
awk '/nfs/{printf "%s ", $2}' /etc/fstab
For lines that match the pattern /nfs/, the second column is printed followed by a space. As a general rule, piping grep into awk is unnecessary as awk can do the pattern matching itself.
If you would like a newline at the end, you could use the END block:
awk '/nfs/{printf "%s ", $2}END{print ""}' /etc/fstab
This prints an empty string, followed by the output record separator (which is a newline). This will mean that you always have a newline in the output even if no matching records were found. If that's a problem, you could use a flag:
awk '/nfs/{f=1;printf "%s ", $2}END{if(f)print ""}' /etc/fstab
The flag f is set to true if the pattern is ever matched, causing the newline to be printed.
Newlines, or any other characters, can be removed or replaced with the tr command:
grep nfs /etc/fstab | awk '{print $2}' | tr '\n' ' '
If you want to get rid of tabs also:
grep nfs /etc/fstab | awk '{print $2}' | tr '\n\t' '  '
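Another option, if the paste utility is available, is to join the lines directly: -s serializes all input lines into one, and -d sets the join delimiter. Using printf as a stand-in for the grep/awk pipeline:

```shell
# stand-in for: grep nfs /etc/fstab | awk '{print $2}'
printf '/proxy_logs\n/proxy_dump\n/sync_logs\n' | paste -sd' ' -
# /proxy_logs /proxy_dump /sync_logs
```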
