Cut all till the end [closed] - bash

I have output in the below pattern
["snaptuda-shv-22-lla1.example.com","snaptuza-shv-22-lla1.example.com","snaptuservice-proxy-shv-22-lla1.example.com"]
I used the below command to extract the domains within the double quotes
cut -d"\"" -f2 file.txt
I got only the first domain, which was
snaptuda-shv-22-lla1.example.com
What I need is all the domains, up to the end of the file. How can I achieve this?

Your input is JSON. For parsing JSON there is jq:
jq -r '.[]' filename
Or, if the input comes from another command's output, pipe it like this:
echo '["snaptuda-shv-22-lla1.example.com",...]' | jq -r '.[]'
snaptuda-shv-22-lla1.example.com
snaptuza-shv-22-lla1.example.com
snaptuservice-proxy-shv-22-lla1.example.com
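If jq isn't available, a rough non-jq sketch (assuming the list stays on a single line and the domains themselves never contain double quotes) is to pull out each quoted string and drop the quotes:
grep -o '"[^"]*"' file.txt | tr -d '"'
This prints one domain per line, the same as the jq command above.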

Related

How to cut a specific word from a string in BASH [closed]

I have a string in a shell script:
string1="0101122100635014,TEST123 22 SEP 06 PQR BC,14,25,0.05,,0915-1530|1815-1915:17,2022-09-30,1665066600,ABC:TEST123629500AB,10,11,90014,TEST123,26009,29500.0,BC"
I want to extract ABC:TEST123629500AB in shell scripting.
echo $string1 | magical command
output: ABC:TEST123629500AB
echo "$string1" | cut -d',' -f10
cut will give you part of a string.
-d defines the separator.
-f specifies the field you want, based on the separator.
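If the field position ever changes, a hedged alternative (assuming the token you want always starts with ABC: and contains only uppercase letters and digits) is to match it directly instead of counting commas:
echo "$string1" | grep -o 'ABC:[A-Z0-9]*'
This prints ABC:TEST123629500AB regardless of which comma-separated field it lands in.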

bash shell execution [closed]

I am using:
sed -s n v
Nothing works for me.
The -i (in-place) flag is not portable: GNU sed accepts a bare -i, while BSD/macOS sed requires a suffix argument (e.g. -i '').
tr ']' '[' < file > temp
mv temp file
The above should work for you.
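If your sed is GNU sed, the same edit can be done in place without the temp file; a sketch, assuming the goal really is to turn every ] into [:
sed -i 'y/]/[/' file
On BSD/macOS sed the equivalent would be sed -i '' 'y/]/[/' file.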

Replace and remove characters in string, and add output as new column [closed]

I have output from a program that I would like to process. If I pipe it to a file I get:
file/path#backup2018
file2/path/more/path/path#backup2019
file3/path#backup2017
And I want to process it so it looks like this:
file/path file.path
file2/path/more/path/path file2.path.more.path.path
file3/path file3.path
I have figured out how to do it with separate commands, but would like a one-liner.
$ awk -F# '{s=$1; gsub("/", ".", s); print $1, s}' file | column -t
file/path file.path
file2/path/more/path/path file2.path.more.path.path
file3/path file3.path
using sed (the second copy's slashes still need to be turned into dots, which takes a small substitution loop):
sed -e 's/\([^#]*\)#.*/\1 \1/' -e ':a' -e 's/\( [^ ]*\)\//\1./' -e 'ta' file | column -t
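If you would rather avoid awk and sed entirely, a pure-bash sketch (assuming the paths never contain spaces or a second #) reads each line, drops everything from the # onward, and prints the path twice, turning slashes into dots in the second copy:
while IFS='#' read -r path _; do printf '%s %s\n' "$path" "${path//\//.}"; done < file | column -t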

How can I remove digits from these strings? [closed]

I have a text file containing a few string values:
PIZZA_123
CHEESE_PIZZA_785
CHEESE_PANEER_PIZZA_256
I need to remove the numeric suffixes from these values and get the following output. The tricky part for me is that the numbers are random every time. I need to strip them and write the remaining string values alone to a file.
PIZZA
CHEESE_PIZZA
CHEESE_PANEER_PIZZA
What is an easy way to do this?
sed 's/_[0-9]*$//' file > file2
Will do it.
There's more than one way to do it. For example, since the numbers always seem to be in the last field, we can just cut off the last field with a little help from the rev util. Suppose the input is pizza.txt:
rev pizza.txt | cut -d _ -f 2- | rev
Since this uses two utils and two pipes, it's not more efficient than sed. The sole advantage for students is that regex isn't necessary -- the only text needed is the _ as a field separator.
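A shell-only sketch is also possible (assuming every line ends in an underscore followed by digits): strip everything from the last underscore with parameter expansion, no external utils at all:
while read -r line; do printf '%s\n' "${line%_*}"; done < file > file2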
You can use the below script for this.
#!/bin/bash
V1=PIZZA_123
V2=CHEESE_PIZZA_785
V3=CHEESE_PANEER_PIZZA_256
# write the values to a temp file, one per line
printf '%s\n' "$V1" "$V2" "$V3" > tem.txt
echo "here are the values:"
# strip the trailing underscore and digits, whatever their length
sed 's/_[0-9]*$//' tem.txt
rm -f tem.txt

Handling a special case during tail -f logging [closed]

I am tailing the logs to find if there is any Exception, as shown below
tail -f flexi.log | grep "Exception" --color
This works fine, but I don't want to log it if it is a DataNotAvailableException.
The DataNotAvailableException comes up frequently and I don't want to log that.
Is this possible?
Just add another grep to filter out the DataNotAvailableExceptions.
tail -f flexi.log | grep "Exception" --color | grep -v "DataNotAvailableException"
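One caveat, in case matches seem to show up late: GNU grep buffers its output when writing to a pipe, so with tail -f the first grep can hold lines back until its buffer fills. Its --line-buffered option flushes each matched line immediately (the --color on the first grep is effectively a no-op here anyway, since its output goes to a pipe rather than a terminal):
tail -f flexi.log | grep --line-buffered "Exception" | grep -v "DataNotAvailableException"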
