Using wget along with awk to download files [duplicate] - bash

This question already has answers here:
How to pass command output as multiple arguments to another command
(5 answers)
Closed 1 year ago.
I have a CSV file called test.csv whose first column contains URLs to be downloaded. I want to download those URLs using wget. How can I do this in the shell?
I have tried the command below, but with no success:
awk -F ',' '{print $1}' test.csv | wget -P download_dir

I don't think you can pipe the URLs into wget like that, but you can run the command once per item with xargs, which appends each URL to the end of the command line.
Try this first and review the list of commands it would run:
awk -F ',' '{print $1}' test.csv | xargs -n1 echo wget -P download_dir
Then remove the echo and run it again; it will execute the commands instead of printing them, as in the sketch below.
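For reference, the final pipeline with the echo removed would look like this; download_dir is simply the target directory from the question:
awk -F ',' '{print $1}' test.csv | xargs -n1 wget -P download_dir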

How to manipulate a string with the grep command? [duplicate]

This question already has answers here:
How can I remove the extension of a filename in a shell script?
(15 answers)
Closed 11 months ago.
I have a filename with the format yyyymmdd.txt. How can I output only yyyymmdd without the .txt extension?
Example:
20220414.txt (before)
20220414 (after)
basename has an option to remove a suffix:
basename -s .txt 20220414.txt
gives:
20220414
Or, if your filename is stored in a variable, bash can help:
a=20220414.txt
echo ${a%.*}
gives:
20220414
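One detail worth noting about ${a%.*}: % removes the shortest matching suffix, so only the last extension is stripped, while %% removes the longest match. A quick illustration with a made-up file name:
a=archive.tar.gz
echo "${a%.*}"     # archive.tar (shortest suffix match removed)
echo "${a%%.*}"    # archive (longest suffix match removed)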
You can use awk with the -F flag to specify . as the separator and then print the first field with $1:
echo "20220414.txt" | awk -F "." '{print $1}'
Output:
20220414
grep doesn't manipulate anything; it shows what is in a file. So you can't modify that file using grep, but you can modify what it shows, using the -o switch, as you can see here:
Prompt> echo "20220414.txt" | grep -o "[0-9]*"
20220414
The pattern [0-9]* means "a sequence of digits, from character '0' to character '9'".

Can you use the paste command with commands and not files? [duplicate]

This question already has an answer here:
Output multiple commands into columns
(1 answer)
Closed 2 years ago.
I'm using Git Bash.
I have a few log files. I want to get a nice list of date & time stamps. The file names start with a 4-digit date, and each line in the files has the time.
I can run the commands separately to put the data into two files, and then mash up the files with the paste command. That works.
So my question is this: can I use commands instead of files within the paste command?
Example:
instead of paste file1 file2, I want to use paste (command1) (command2). Is this possible?
I tried grouping the commands like this:
paste (grep -F -e <string> <files> | cut -c1-4) (awk '/\-/ {print $1, $2}' <files>)
I got the error "syntax error near unexpected token grep"
So then I tried using command substitution:
paste $(grep -F -e <string> <files> | cut -c1-4) $(awk '/\-/ {print $1, $2}' <files>)
But unfortunately it didn't like this either. Does anybody know what I'm missing here?
Thanks to a commenter, I found the answer here:
Output multiple commands into columns
Corrected code (that works) is:
paste <(grep -F -e <string> <files> | cut -c1-4) <(awk '/\-/ {print $1, $2}' <files>)
The difference is that the substituted commands are wrapped in <( ) (process substitution) rather than $( ) (command substitution).
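As a minimal illustration of process substitution with paste (the commands below are placeholders, not the asker's grep/awk pipelines), each <( ) makes a command's output look like a file:
paste <(seq 1 3) <(printf '%s\n' a b c)
# 1    a
# 2    b
# 3    c   (columns are tab-separated)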

UNIX command and output in a single delimited file [duplicate]

This question already has answers here:
UNIX shell script to run a list of grep commands from a file and getting result in a single delimited file
(4 answers)
Closed 8 years ago.
Is there a way to get the command that is being executed and the result of the command in the same output file?
I want an output file like this:
command1 'some kind of delimiter' output 'New line'
command2 'some kind of delimiter' output 'New line'
command3 'some kind of delimiter' output 'New line'
Is this possible?
Please guide me on achieving this.
I am not sure what your command is, but here's a sample that uses the ls command just for illustration purposes.
ls script.sh >> temp; echo $(history | tail -2 | head -1 | awk '{print $2}' && cat temp) && rm -rf temp
What it does is:
the command ls redirects its output to a temp file (remember, I don't know your command; ls is just a sample, so its output is stored in the temp file)
the second command grabs the last command you executed, in this case ls, together with the output we stored
and finally it deletes the temp file that was just created.
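If the commands can be listed explicitly, a simpler sketch is possible. This is only an illustration under assumptions not in the original answer: the command list and the | delimiter are placeholders, and each command's output is assumed to fit on one line.
#!/bin/bash
# Run each command and write "command|output" on its own line.
commands=("date" "whoami" "uname -r")
for cmd in "${commands[@]}"; do
    printf '%s|%s\n' "$cmd" "$($cmd)"
done > results.txt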

bash: execute the "edmEventSize" command, but it is not found when I type bash script.sh

I have a file from which, using the command "edmEventSize", I can extract a piece of information (it is a number). But now I have 700 files on which I have to execute that command, and I am trying to do it in a bash script. I cannot even do it for just one file, since I get "edmEventSize command not found". I have already looked for more information, but since I am new to bash I cannot solve this task.
Thank you in advance.
This is my script:
#/usr/bin/env sh
for i in {1..700};
do
FILE="Py6_BstoJpsiKs0_7TeV_RECO_Run-0${i}.root"
edmEventSize... $FILE.root > salida${i}.log
done
head *.log | grep "^File" | cut -f4 > a.txt
rm *.log
As everyone would suggest, you can simplify your script like this:
#!/bin/bash
for i in {1..700}; do
    FILE="Py6_BstoJpsiKs0_7TeV_RECO_Run-0${i}.root"
    /path/to/edmEventSize "$FILE"
done | awk -F $'\t' '/^File/{print $4}' > a.txt
If your files are actually named in the format Py6_BstoJpsiKs0_7TeV_RECO_Run-####.root, then maybe the command you really need is:
printf -v FILE 'Py6_BstoJpsiKs0_7TeV_RECO_Run-%04d.root' "$i"
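Putting the two suggestions together, a full sketch might look like this; /path/to/ is a placeholder for wherever edmEventSize is actually installed on your system:
#!/bin/bash
# Zero-pad the run number, call the tool by its full path,
# and keep the fourth tab-separated field of each "File" line.
for i in {1..700}; do
    printf -v FILE 'Py6_BstoJpsiKs0_7TeV_RECO_Run-%04d.root' "$i"
    /path/to/edmEventSize "$FILE"
done | awk -F $'\t' '/^File/{print $4}' > a.txt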

How to cat a file and write the result back to a file with the same name, without creating a new file | Unix Korn shell [duplicate]

This question already has answers here:
How can I use a file in a command and redirect output to the same file without truncating it?
(14 answers)
Redirect output from sed 's/c/d/' myFile to myFile
(10 answers)
Using the same file for stdin and stdout with redirection
(3 answers)
How to redirect and replace the input file with the output (don't erase myfile when doing "cat myfile > myfile")
(3 answers)
Why piping to the same file doesn't work on some platforms?
(5 answers)
Closed 5 years ago.
Is there a way to cat a file, filter it by something, and then output to a file with the same name? I'm doing that and getting an empty file, but if I write to a different file name it works. I don't want to create a new file.
Example:
File="My_test_file.txt"
cat ${File} | grep -v "test" > ${File}
Done that way it doesn't work; I have to write to another file to make it work, as follows:
File="My_test_file.txt"
cat ${File} | grep -v "test" > ${File}.tmp
Any idea?
There's a package called moreutils that contains the tool sponge for this exact purpose:
grep -v test foo.txt | sponge foo.txt
If installing tools is not an option, you can implement a naive version that first reads all data into memory, and then finally writes it out:
#!/bin/sh
sponge() (
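# Read all of stdin into a variable; the extra x keeps the command
# substitution from stripping trailing newlines. The x is removed
# again (${var%x}) when the data is written to the target file.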
var="$(cat; printf x)"
printf '%s' "${var%x}" > "$1"
)
grep -v test foo.txt | sponge foo.txt
What you are attempting to do with cat and grep -v can be done more easily with sed -i '/pattern/d', which also saves the changes in place:
sed -i.bak '/test/d' "$file"
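If neither sponge nor sed -i is available, the portable pattern is essentially what the asker already tried, plus a rename at the end; the file name is just the example from the question:
File="My_test_file.txt"
grep -v "test" "${File}" > "${File}.tmp" && mv "${File}.tmp" "${File}"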
