Can you use the paste command with commands and not files? [duplicate] - bash

This question already has an answer here:
Output multiple commands into columns
(1 answer)
Closed 2 years ago.
I'm using Git Bash.
I have a few log files. I want to get a nice list of date & time stamps. The file names start with a 4-digit date, and each line item in the files has the time.
I can run the commands separately to put the data into two files, and then mash up the files with the paste command. That works.
So my question is this: can I use commands instead of files within the paste command?
example:
instead of paste file1 file2, I want to use paste (command1) (command2). Is this possible?
I tried grouping the commands like this:
paste (grep -F -e <string> <files> | cut -c1-4) (awk '/\-/ {print $1, $2}' <files>)
I got the error "syntax error near unexpected token grep"
So then I tried using command substitution:
paste $(grep -F -e <string> <files> | cut -c1-4) $(awk '/\-/ {print $1, $2}' <files>)
But unfortunately it didn't like this either. Anybody know what I'm missing here?

Thanks to a commenter, I found the answer here:
Output multiple commands into columns
Corrected code (that works) is:
paste <(grep -F -e <string> <files> | cut -c1-4) <(awk '/\-/ {print $1, $2}' <files>)
The difference is that each substituted command is wrapped in <(...) (process substitution) rather than $(...) (command substitution).
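For a self-contained illustration of process substitution (the seq commands here are just placeholders), each <(...) runs its command and hands paste a file-like path (a FIFO or /dev/fd entry) to read from:
$ paste <(seq 1 3) <(seq 4 6)
1	4
2	5
3	6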

Related

Using wget along with awk to download files [duplicate]

This question already has answers here:
How to pass command output as multiple arguments to another command
(5 answers)
Closed 1 year ago.
I have a csv called test.csv that contains URLs to be downloaded in its first column. I want to download the URLs using wget. How can I do this in a shell script?
I have used the command below but no success:
awk -F ',' '{print $1}' test.csv | wget -P download_dir
I don't think you can pipe the URLs straight into wget (although wget -i - can read a URL list from standard input), but you can run the command once per item with the URL appended to the end.
Try this first and review the list of commands it will run:
awk -F ',' '{print $1}' test.csv | xargs -n1 echo wget -P download_dir
Then remove the echo and run it again; this time it will execute the commands instead of printing them.
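So, assuming the first column really does contain bare URLs, the final command would be:
awk -F ',' '{print $1}' test.csv | xargs -n1 wget -P download_dir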

Extract specific string from line with standard grep, egrep or awk

I'm trying to extract a specific string from a grep output.
uci show minidlna
produces a large list
...
minidlna.config.enabled='1'
minidlna.config.db_dir='/mnt/sda1/usb/db'
minidlna.config.enable_tivo='1'
minidlna.config.wide_links='1'
...
So I tried to narrow down what I wanted by running:
uci show minidlna | grep -oE '\bdb_dir=\S+'
This narrows the output down to:
db_dir='/mnt/sda1/usb/db'
What I want is to output only
/mnt/sda1/usb/db
without the quotes and without the leading "db_dir=", so I can run rm /mnt/sda1/usb/db/file.db.
I've used the answers found here:
How to extract string following a pattern with grep, regex or perl
and that's as close as I got.
EDIT: after using Ed Morton's awk command I needed to pass the output to the rm command.
I used:
| ( read DB; rm "$DB/files.db" )
read DB reads the output into the variable DB.
( ... ) groups the commands in a subshell, so DB is visible to rm.
rm "$DB/files.db" deletes the file files.db.
Is this what you're trying to do?
$ awk -F"'" '/db_dir/{print $2}' file
/mnt/sda1/usb/db
That will work in any awk in any shell on every UNIX box.
If that's not what you want then edit your question to clarify your requirements and post more truly representative sample input/output.
Using sed with some effort to avoid single quotes (note that \s and \S are GNU sed extensions):
sed -n 's/^minidlna.config.db_dir=\s*\S\(\S*\)\S\s*$/\1/p' input
Well, so you end up having a string like db_dir='/mnt/sda1/usb/db'.
I would first remove the quotes by piping this to
.... | tr -d "'"
Now you end up with a string like db_dir=/mnt/sda1/usb/db.
Say you have this string stored in a variable named confstr, then
${confstr##*=}
gives you just /mnt/sda1/usb/db: the pattern *= matches everything from the start of the string through the = sign, and ## removes the longest such prefix.
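Putting those pieces together into a sketch (the variable names are just illustrative):
confstr=$(uci show minidlna | grep -oE '\bdb_dir=\S+' | tr -d "'")
db=${confstr##*=}    # /mnt/sda1/usb/db
rm "$db/files.db"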
I would do this:
Once you have extracted the line above into file.txt (or pipe it into this command), split the fields on the quote character, use printf to generate the rm command, and pipe that into bash to execute it.
$ awk -F"'" '{printf "rm %s.db/file.db\n", $2}' file.txt | bash
rm: /mnt/sda1/usb/db.db/file.db: No such file or directory
With your original command:
$ uci show minidlna | grep -oE '\bdb_dir=\S+' | \
awk -F"'" '{printf "rm %s/file.db\n", $2}' | bash

Get the extension of file [duplicate]

This question already has answers here:
Extract filename and extension in Bash
(38 answers)
Closed 7 years ago.
I have files with "multiple" extensions; for easier manipulation I would like to create a new folder for each last extension, but first I need to retrieve that last extension.
Just for example, let's assume I have a file called info.tar.tbz2; how could I get "tbz2"?
One way that comes to mind is using cut -d ".", but in that case I would need to pass -f the number of the last field, which I don't know how to determine.
What is the fastest way to do it?
You may use awk or sed:
$ echo 'info.tar.tbz2' | awk -F. '{print $NF}'
tbz2
$ echo 'info.tar.tbz2' | sed 's/.*\.//'
tbz2
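If the name is already in a shell variable, a pure-bash alternative (no external process) is parameter expansion, where ##*. strips everything up to and including the last dot:
$ f=info.tar.tbz2; echo "${f##*.}"
tbz2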

Find string from a file to another file in shell script

I am new to shell scripting, and I just want to know how I can obtain the result I want for the following:
I have two files (FILE_A and FILE_B)
FILE_A contains:
09228606355,71295939,1,http://sun.net.ph/043xafj.xml,01000001C123000D30
09228505450,71295857,1,http://sun.net.ph/004xafk.xml,01000001C123000D30
FILE_B contains:
http://sun.net.ph/161ybfq.xml ,9220002354016,93111
http://sun.net.ph/004xafk.xml ,9220002354074,93111
If the URL (4th field) in FILE_A is present in FILE_B, the output will be:
09228505450,71295857,1,http://sun.net.ph/004xafk.xml,01000001C123000D30,9220002354074,93111
That is, display the whole line from FILE_A with the 2nd and 3rd fields of FILE_B appended.
I hope my question is clear. Thank you.
This might work for you (GNU sed):
sed -r 's/^\s*(\S+)\s*,(.*)/\\#^([^,]*,){3}\1#s#$#,\2#p/' fileB | sed -nrf - fileA
This builds a sed script from fileB and runs it against fileA. The second sed invocation runs in silent mode (-n), so only the lines that the generated script matches are printed.
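To make that less opaque: for the second line of fileB, the generated script contains a command roughly like the one below, which matches any fileA line whose first three comma-separated fields are followed by that URL, appends the remaining fileB fields to the end, and prints it:
\#^([^,]*,){3}http://sun.net.ph/004xafk.xml#s#$#,9220002354074,93111#p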
Try this:
paste -d , A B | awk -F , '{if ($4==$6) print "match", $1,$2,$3,$4,$5,$7,$8;}'
I removed the spaces in your FILE_B for the $4==$6 comparison to work.
I use paste with , as the delimiter to join each pair of lines into a single comma-separated record, then use an awk comparison to check the URLs from both files; when they match, I print all the fields you care about. Note that this assumes the matching lines appear at the same position in both files.
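If the matching lines are not guaranteed to line up, a lookup-based awk sketch (reading FILE_B into an array first and stripping the stray spaces around its first field) avoids that limitation:
awk -F',' '
  NR==FNR { gsub(/ /,"",$1); extra[$1]=$2","$3; next }   # load FILE_B: url -> 2nd,3rd fields
  ($4 in extra) { print $0 "," extra[$4] }               # append on a 4th-field match in FILE_A
' FILE_B FILE_A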

Bash Script Write File Out Depending on Contents

I have a file and it's in the form:
Thread1.Action1, wahhhhhh
Thread1.Action1, blahhhhhh
Thread1.Action2, wooooooo
Thread1.Action2, weeeeeee
Thread1.Action2, baaaaaaa
Thread2.Action1, mooooooo
Thread2.Action2, wooooooof
What I need to do is:
Write a file whose filename is the first bit before the comma; this file should then contain all the lines associated with it. E.g. there should be 4 files in this case: Thread1.Action1.out, Thread1.Action2.out, Thread2.Action1.out and Thread2.Action2.out.
For example, Thread1.Action2.out should contain:
Thread1.Action2, wooooooo
Thread1.Action2, weeeeeee
Thread1.Action2, baaaaaaa
Thread2.Action1.out should contain:
Thread2.Action1, mooooooo
etc..
Note: I want it to be agnostic to the name of the first column, i.e. I won't necessarily know what the data in the first column is before executing the script, but there will be groups of it...
I want to write a bash script that will be able to do this. I've tried doing bits of it in awk, but it's getting very messy.
Any help?
awk -F, '{print > ($1".out")}' your_file
For your comment: execute the command below to create all the directories at the same time:
awk -F. '{print $1}' your_file | sort -u | xargs mkdir
Now execute this command:
awk -F, '{split($1,a,"."); print > (a[1]"/"$1".out")}' your_file
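For completeness, the original splitting step can also be sketched in plain bash (slower than awk, but it shows the logic):
while IFS=, read -r key rest; do
  printf '%s,%s\n' "$key" "$rest" >> "$key.out"   # append each line to <first field>.out
done < your_file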
