I have a string in a shell script:
string1="0101122100635014,TEST123 22 SEP 06 PQR BC,14,25,0.05,,0915-1530|1815-1915:17,2022-09-30,1665066600,ABC:TEST123629500AB,10,11,90014,TEST123,26009,29500.0,BC"
I want to extract ABC:TEST123629500AB using shell scripting:
echo $string1 | magical command
output: ABC:TEST123629500AB
echo "$string1" | cut -d',' -f10
cut will give you part of the string:
-d defines the delimiter.
-f specifies the field you want, based on that delimiter.
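If you would rather avoid an external command, a pure-bash sketch (assuming the value always sits in field 10) splits the string on commas with read:
IFS=',' read -r -a fields <<< "$string1"
echo "${fields[9]}"   # arrays are zero-indexed, so field 10 is index 9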
Domain name:
jewelleryfurkeeps.co.uk
Data validation:
Nominet was able to match the registrant's name and address against a 3rd party data source on 30-Nov-2022
Registrar:
Namecheap, Inc. [Tag = NAMECHEAP-INC]
URL: https://www.namecheap.com
What I want:
Domain name:
jewelleryfurkeeps.co.uk
From the above, I want only: jewelleryfurkeeps.co.uk
With perl:
$ perl -0nE 'say $1 if /Domain name:\s+(\S+)/' file
With awk:
$ awk '/^Domain name/{p=1;next} /^$/{exit} p{gsub(/ /, "");print}' file
With grep:
grep -A1 '^Domain name' file | tail -n1 | tr -d ' '
Output
jewelleryfurkeeps.co.uk
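A sed alternative (a sketch with GNU sed, assuming the whois output is in file and the domain is on the line after the Domain name: header): suppress default output, and on the header line read the next line, strip its leading whitespace, print it, and quit.
sed -n '/^Domain name:/{n;s/^[[:space:]]*//;p;q;}' file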
I have output in the pattern below:
["snaptuda-shv-22-lla1.example.com","snaptuza-shv-22-lla1.example.com","snaptuservice-proxy-shv-22-lla1.example.com"]
I used the command below to extract the domains from within the double quotes:
cut -d"\"" -f2 file.txt
I got only the first domain, which was
snaptuda-shv-22-lla1.example.com
What I need is all the domains, through to the end of the file. How can I achieve this?
Your input is JSON. For parsing JSON there is jq:
jq -r '.[]' filename
Or, if the input is piped in from another command's stdout:
echo '["snaptuda-shv-22-lla1.example.com",...]' | jq -r '.[]'
snaptuda-shv-22-lla1.example.com
snaptuza-shv-22-lla1.example.com
snaptuservice-proxy-shv-22-lla1.example.com
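If jq is not available, a grep sketch can pull out every quoted string instead; this assumes the values contain no escaped quotes:
grep -oE '"[^"]+"' file.txt | tr -d '"'
grep -o prints each quoted match on its own line, and tr strips the surrounding quotes.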
I am using
sed -s n v
but nothing works for me.
The -i (in-place) flag is a GNU sed extension; other implementations either lack it or handle it differently (BSD/macOS sed requires a backup-suffix argument).
tr ']' '[' < file > temp
mv temp file
The above should work for you.
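If the goal is an in-place replacement of ] with [ (as the tr answer suggests), the sed invocation differs between implementations; a sketch, assuming the target file is named file:
sed -i 's/]/[/g' file        # GNU sed: -i alone edits the file in place
sed -i '' 's/]/[/g' file     # BSD/macOS sed: -i needs a suffix argument; '' means no backup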
I have two tab-delimited files:
File1.tab
100 ABC
300 CDE
File2.tab
399 GSA
300 CDE
I want an awk command to return 1, because the row '300 CDE' is common to both files.
I almost hate to encourage laziness by answering a question with so little effort put into it, but did you try grep?
$: grep -c -f File1.tab File2.tab
1
If lines are unique per file, you can use grep:
grep -f File1.tab File2.tab | wc -l
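Since the question asked for awk, here is a minimal sketch that counts the whole lines of File1.tab that appear verbatim in File2.tab:
awk 'NR==FNR {seen[$0]; next} $0 in seen {count++} END {print count+0}' File1.tab File2.tab
Note that plain grep -f treats each line of File1.tab as a pattern and matches substrings; adding -F and -x (grep -cxFf File1.tab File2.tab) restricts it to exact, fixed-string, whole-line matches.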
I have output from a program that I would like to process. If I pipe it to a file I get:
file/path#backup2018
file2/path/more/path/path#backup2019
file3/path#backup2017
And I want to process it so it looks like this:
file/path file.path
file2/path/more/path/path file2.path.more.path.path
file3/path file3.path
I have figured out how to do it with separate commands but would like a one-liner.
$ awk -F# '{s=$1; gsub("/", ".", s); print $1, s}' file | column -t
file/path file.path
file2/path/more/path/path file2.path.more.path.path
file3/path file3.path
Using sed (a substitution loop converts the slashes in the second copy to dots):
sed -e 's/\([^#]*\)#.*/\1 \1/' -e ':a' -e 's|\(.* [^ /]*\)/|\1.|' -e 'ta' file | column -t
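A pure-bash alternative (a sketch, assuming the program's output is saved in file) uses parameter expansion instead of awk or sed:
while IFS='#' read -r path _; do
    printf '%s %s\n' "$path" "${path//\//.}"   # ${path//\//.} turns every / into .
done < file | column -t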