How would I run a unix command in a loop with variable data? - shell

I'd like to run a unix command in a loop, substituting a variable on each iteration, and then store the output in a file.
I'll be grabbing the HTTP headers of a series of URLs using curl -I, and I want each result written to a new line of a file.
I know I could capture the output with | cat or redirect it into a file with >, but how would I run the loop?
I have a file with a list of URLs, one per line (or I could comma-separate them, alternatively).

You can write:
while IFS= read -r url ; do
curl -I "$url"
done < urls-to-query.txt > retrieved-headers.txt
(This uses the built-in read command, which reads a line from standard input, here redirected from urls-to-query.txt, and saves it in a variable, here url.)
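If "each instance on a new line" means you want a single line per URL rather than the full multi-line header block, a small variation might help (just a sketch: it keeps only the status line, and -s is added to silence curl's progress output):
while IFS= read -r url ; do
printf '%s %s\n' "$url" "$(curl -sI "$url" | head -n 1 | tr -d '\r')"
done < urls-to-query.txt > retrieved-headers.txt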

Given a list of URLs in a file:
http://url1.com
http://url2.com
You could run
cat inputfile | xargs curl -I >> outputfile
That reads each line of the input file and appends the result for each URL to outputfile.
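If you'd rather have curl invoked once per URL (xargs may otherwise hand several URLs to a single curl call, which curl also accepts), you could add -n 1 and skip the cat:
xargs -n 1 curl -I < inputfile >> outputfile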

Related

$ sign in URL from text file gives error while downloading via wget

I want to download files with wget in a bash script. The URLs look like this:
https://xxx.blob.core.windows.net/somefolder/$somefile.webm?sv=xxx&sp=r..
The problem is the $ (dollar) sign in the URL.
When I download the file with the URL in double quotes, I get a 403, probably because the $ sign gets interpreted:
wget "https://xxx.blob.core.windows.net/somefolder/$somefile.webm?sv=xxx&sp=r.."
When I single quote the url and download the file everything goes well:
wget 'https://xxx.blob.core.windows.net/somefolder/$somefile.webm?sv=xxx&sp=r..'
But the url should come from a line in a text file. So I read the file and pass the lines as url:
files=($(< file.txt))
# Read through the url.txt file and execute wget command for every filename
while IFS='=| ' read -r param uri; do
for file in "${files[@]}"; do
wget "${file}"
done
done < file.txt
I get the 403 here as well and don't know how to prevent the terminal from interpreting the dollar sign. How can I achieve this?
But the url should come from a line in a text file.
If you have a file with one URL per line, or can easily alter your file to hold one URL per line, then you might use the -i file option. From the wget man page:
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)
If this function is used, no URLs need be present on the command line. If there are URLs both on the command line and in an input file, those on the command lines will be the first ones to be retrieved. If --force-html is not specified, then file should consist of a series of URLs, one per line. (...)
So if you have a single file, say urls.txt, you might use it like so:
wget -i urls.txt
and if you have a few files you can concatenate them and feed them in through standard input like so:
cat urls1.txt urls2.txt urls3.txt | wget -i -
If the file(s) contain additional data, remember to process them first so that GNU wget gets only URLs.
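For example, if the lines carry extra fields around the URLs, something like this might work (only a sketch, assuming the URLs start with http:// or https:// and contain no spaces; mixed-data.txt is a made-up filename):
grep -o 'https\?://[^ ]*' mixed-data.txt | wget -i -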

Sending file contents to another command - bash

I have a plain text file with two columns. I need to take each line which contains two columns and send them to a command.
The source file looks like this:
potato potato2
the line needs to be sent to another command so it looks like this
command potato potato2
The output can just go to stdout.
It's been such a long time since I've written a simple bash script...
I assume that your file contains two columns per line, separated by either spaces or tabs.
xargs -n 2 command < file.txt
See: man xargs
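To see how xargs groups the columns into arguments, you can substitute echo for the real command (command here is just the placeholder name from the question):
xargs -n 2 echo command < file.txt
For the sample line above this prints: command potato potato2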
Looks like you just need to read a file line by line, so the following code should do:
while read -r line
do
echo "$line" | xargs your-other-command #Use xargs to convert input into arguments
done < source-file.txt
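Alternatively, since each line holds exactly two columns, read can split them into separate variables itself, which avoids the extra echo and xargs processes (a sketch; the variable names are just placeholders):
while read -r first second
do
your-other-command "$first" "$second"
done < source-file.txt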

How to run a command through a text file with different number of arguments on each line

I have a text file similar to the one below (much longer). I'm trying to do a lookup for each of these IP addresses using the host command. Do you know how I could do this in the order of the text file (entire first line, then the second line, etc.)?
I tried using this, but it did not execute correctly:
while read in; do host "$in"; done < inputfile.txt > outputfile.txt
Input text file:
10.10.999.200 10.11.223.334 10.55.555.555
10.12.238.222 10.52.212.212
10.12.238.222 10.14.217.232
10.23.212.212 10.19.301.305 10.12.345.678
Translate the spaces to newlines and pipe each IP to xargs to process:
tr ' ' '\n' < inputfile.txt | xargs -IX host X > outputfile.txt
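If you'd rather stay close to your original loop and walk each line left to right explicitly, a nested loop should give the same ordering (a sketch; the unquoted $line relies on word splitting to separate the addresses):
while read -r line; do
for ip in $line; do
host "$ip"
done
done < inputfile.txt > outputfile.txt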
I would do it this way:
cat file | while read line
do
echo "$line"
done
This way you can process the file line by line. However, if your file is huge it will take a long time, because the shell runs the loop body once per line; in that case you may be better off with awk.
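For completeness, an awk version of the same lookup might look like this (only a sketch; it still starts one host process per address, so the gain mainly comes from awk doing the line and field splitting):
awk '{ for (i = 1; i <= NF; i++) system("host " $i) }' inputfile.txt > outputfile.txt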

Search recursively text from each line in file using standard cmd line commands

I have a file with variable names, prepared by a grep call.
Now I want to do the following: grep a directory recursively and search each file for entries of each variable (from the initially prepared file). How could I achieve this via awk/sed or any other console utility? I know how to do it with, for example, a Python script, but this time I'd like a pure console solution.
I am stuck on applying a command to the data: awk '{ print $0 }' RS="/" settings_vars.txt. Is that right? And how do I call a command instead of printing the line content?
You can use recursive grep with -f option:
grep -rHf settings_vars.txt .
Options used are:
-f # Read one or more newline-separated patterns from file.
-H # Always print filename headers with output lines.
-r # Read all files under each directory, recursively, following symbolic links only if they are on the command line.
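If the entries in settings_vars.txt are literal variable names rather than regular expressions, you may also want -F, so that characters like . or $ in the names are not treated as regex metacharacters:
grep -rHFf settings_vars.txt .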

how to send text to a process in a shell script?

So I have a Linux program that runs in a while(true) loop, waiting for user input, processing it and printing the result to stdout.
I want to write a shell script that opens this program, feeds it lines from a txt file one line at a time, and saves the program's output for each line to a file.
So I want to know if there are commands for:
- opening a program
- sending text to a process
- receiving output from that program
Many thanks.
It sounds like you want something like this:
cat file | while read line; do
answer=$(echo "$line" | prog)
done
This will run a new instance of prog for each line. The line will be the standard input of prog and the output will be put in the variable answer for your script to further process.
Some people object to the "cat file |" as this creates a process where you don't really need one. You can also use file redirection by putting it after the done:
while read line; do
answer=$(echo "$line" | prog)
done < file
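If all the per-line processing you need is to save each answer, you could simply print it inside the loop and redirect the whole loop's output, as in the examples above (a sketch; results.txt is just a placeholder name):
while read -r line; do
answer=$(echo "$line" | prog)
printf '%s\n' "$answer"
done < file > results.txt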
Have you looked at pipes and redirections? You can use pipes to feed the output of one program into another. You can use redirection to send the contents of files to programs, and/or write output to files.
I assume you want a script written in bash.
To run a program you just need to type its name.
To send text to a program you either pipe it in with | or take input from a file with <.
To receive output you use > to redirect output to a file, or >> to redirect as well but append the results instead of truncating the file.
To achieve what you want in bash, you could write:
#!/bin/bash
cat input_file | xargs -I {} your_program {} >> output_file
This calls your_program once for each line of input_file and appends the results to output_file.
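One more possibility, depending on how the program behaves: if it simply keeps reading lines from standard input inside its while(true) loop, it may not need to be restarted per line at all, and plain redirection will feed it the whole file and capture everything it prints:
./your_program < input_file > output_file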
