I've seen the technique before, but I don't know what it's called and have forgotten the exact syntax. Say I need to feed a file to a program like this: command < input-file. However, I want to pass the lines of the input file directly to the command, without the intermediate input file. It looks something like this, but it doesn't work:
command < $(file-line1; file-line2; file-line3)
Can someone tell me what this is called and how to do it?
This is called Process Substitution:
command < <(printf "%s\n" "file-line1" "file-line2" "file-line3")
With the above, command will think it's being given a file with a name much like /dev/fd/XX, where 'XX' is some number. As you mentioned, this behaves like a temporary file (it's actually a file descriptor), and it will contain the three lines you passed to the printf command.
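If you want to see the name the command actually receives, you can print the substitution itself; the exact /dev/fd number varies by system:
$ echo <(printf "%s\n" "file-line1" "file-line2" "file-line3")
/dev/fd/63
$ cat <(printf "%s\n" "file-line1" "file-line2" "file-line3")
file-line1
file-line2
file-line3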
A here string:
command <<< $'line 1\nline 2\nline 3\n'
Or a heredoc:
command << EOF
line 1
line 2
line 3
EOF
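Note that with an unquoted delimiter the shell expands variables and command substitutions inside the heredoc; if you want the text passed through literally, quote the delimiter:
command << 'EOF'
$HOME is passed through literally here
line 2
line 3
EOF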
I think you are referring to a "here document". Like this:
#!/bin/sh
cat <<EOF
This is
the
lines.
EOF
How about:
cat myfile.txt | head -n3 | command
Related
I have a plain text file with two columns. I need to take each line which contains two columns and send them to a command.
The source file looks like this:
potato potato2
The line needs to be sent to another command so it looks like this:
command potato potato2
The output can just go to stdout.
It's been such a long time since I've tried writing a simple bash script...
I assume that your file contains two columns per line, separated by either spaces or tabs.
xargs -n 2 command < file.txt
See: man xargs
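For example, with echo standing in for the real command (and a second sample line added purely for illustration), each line's two columns become the arguments of one invocation:
$ cat file.txt
potato potato2
carrot carrot2
$ xargs -n 2 echo < file.txt
potato potato2
carrot carrot2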
Looks like you just need to read a file line by line, so the following code should do:
while read -r line
do
    echo "$line" | xargs your-other-command  # Use xargs to convert input into arguments
done < source-file.txt
Goal: using an input file with a list of file names, get the first 5 lines of each file and output to another file. Basically, I'm trying to find out what each program does by reading the header.
Shell: Ksh
Input: myfile.txt
tmp/file1.txt
tmp/file2.txt
Output:
tmp/file1.txt - "Creates web login screen"
tmp/file2.txt - "Updates user login"
I can use "head -5" but not sure how to get the input from the file. I'm assuming I could redirect (>> output.txt)the output for my output file.
Input file names use a relative path.
Update: I created the script below, but I'm getting "syntax error: unexpected end of file". The script was created with vi.
#! /bin/sh
cat $HOME/jmarti20.list | while read line
do
#echo $line" >> jmarti20.txt
head -n 5 /los_prod/$line >> $HOME/jmarti20.txt
done
Right, you can append output to a file with >>.
head -n 5 file1.txt >> file_descriptions.txt
You can also use sed to print lines; see the documentation at pinfo sed.
sed 5q file1.txt >> file_descriptions.txt
My personal preference is to put the file description on line 3, and only print line 3 of each file.
sed -n 3p file1.txt >> file_descriptions.txt
The reasoning for using line 3 is that the first line often contains a "shebang" like #!/bin/bash, and the second line often holds an encoding declaration, such as # -*- coding: UTF-8 -*-, to allow proper display of extra character glyphs and languages in terminals and text editors that support them.
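For instance, if a file follows that layout (hypothetical contents shown here), line 3 is exactly the description line:
$ head -n 3 tmp/file1.txt
#!/bin/bash
# -*- coding: UTF-8 -*-
# Creates web login screen
$ sed -n 3p tmp/file1.txt
# Creates web login screen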
Below is what I came up with, and it seems to work fairly well:
#! /bin/sh
cat $HOME/jmarti20.list | while read line
do
temp=$line
temp2=$(head -n 5 /los_prod/$line)
echo "$temp" "$temp2" >> jmarti20.txt
#echo "$line" >> jmarti20.txt
#head -n 5 /los_prod/$line >> $HOME/jmarti20.txt
done
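A slightly tidier sketch of the same loop (assuming the same file locations) drops the extra cat and quotes the variables, which protects against spaces in file names:
#!/bin/sh
while read -r line
do
    # write the file name followed by its first 5 lines to the output file
    echo "$line" "$(head -n 5 "/los_prod/$line")" >> "$HOME/jmarti20.txt"
done < "$HOME/jmarti20.list"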
I have a simple script that reads the content of an input file line by line and adds some extra strings. Here is an example.
input file: input.txt
Content of input.txt:
aaaaaa
bbbbbb
cccccc
dddddd
eeeeee
Code to read the file:
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "abc.def.hig.ewe.adg.hea.L_${line}.great"
done < "$1"
I'm not sure exactly where it goes wrong; I cannot get correct output. It looks like when you add .great at the end of the variable, the output sequence gets messed up.
Transferring comments into an answer.
What output do you get? What output do you expect? I got five lines similar to:
abc.def.hig.ewe.adg.hea.L_aaaaaa.great
using Bash 3.2.57 and Bash 4.3 (on Mac OS X 10.11.5).
Is the data file from Windows or some other source that uses CRLF line endings? That will make things look like:
.greatf.hig.ewe.adg.hea.L_aaaaaa
instead of what I showed before. And, I note, this would have been readily explicable (or discountable) if you'd only shown the output you were getting.
Yes, you are right. The problem is the input file. It got corrupted when uploading from Windows to Unix.
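For reference, one way to guard against this is to strip the carriage returns, either once on the whole file or inside the loop; a sketch (the converted file name is just an example):
tr -d '\r' < input.txt > input.unix.txt
Or inside the loop itself:
while IFS='' read -r line || [[ -n "$line" ]]; do
    line=${line%$'\r'}    # drop a trailing carriage return, if present
    echo "abc.def.hig.ewe.adg.hea.L_${line}.great"
done < "$1"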
Have you considered using something like awk? It is very good for these types of simple tasks.
$ cat input.txt
aaaa
bbbb
cccc
$ awk '{print "prefix_"$0"_suffix"}' input.txt
prefix_aaaa_suffix
prefix_bbbb_suffix
prefix_cccc_suffix
I am looking for a bash one-liner that duplicates stdin to stdout without interleaving. The only solution I have found so far is to use tee, but that produces interleaved output. What do I mean by this:
If e.g. a file f reads
a
b
I would like to execute
cat f | HERE_BE_COMMAND
to obtain
a
b
a
b
If I use tee - as the command, the output typically looks something like
a
a
b
b
Any suggestions for a clean solution?
Clarification
The cat f command is just an example of where the input can come from. In reality, it is a command that can (should) only be executed once. I also want to refrain from using temporary files, as the processed data is sort of sensitive and temporary files are always error-prone when the executed command gets interrupted. Furthermore, I am not interested in a solution that involves additional scripts (as stated above, it should be a one-liner) or preparatory commands that need to be executed prior to the actual duplication command.
Solution 1:
<command_which_produces_output> | { a="$(</dev/stdin)"; echo "$a"; echo "$a"; }
In this way, you save the content of standard input in the variable a (choose a better name, please), and then echo it twice.
Notice $(</dev/stdin) is a similar but more efficient way to do $(cat /dev/stdin).
Solution 2:
Use tee in the following way:
<command_which_produces_output> | tee >(echo "$(</dev/stdin)")
Here, you first write to standard output (that's what tee does), and also write to a FIFO created by process substitution:
>(echo "$(</dev/stdin)")
See, for example, the file it creates on my system:
$ echo >(echo "$(</dev/stdin)")
/dev/fd/63
Now, the echo "$(</dev/stdin)" part is just the way I found to read the entire input before printing it. It echoes the content read from the process substitution's standard input, but only once all the input has been read (unlike cat, which prints line by line).
Store a second copy in a temp file.
cat f | tee /tmp/showlater
cat /tmp/showlater
rm /tmp/showlater
Update:
As noted in the comments (@j.a.), the solution above may need to be adjusted to the OP's real needs: it is easier to call from a function, and you need to decide what to do about errors in your initial command and in the tee/cat/rm sequence.
I recommend tee /dev/stdout.
cat f | tee /dev/stdout
One possible solution I found is the following awk command:
awk '{d[NR] = $0} END {for (i=1;i<=NR;i++) print d[i]; for (i=1;i<=NR;i++) print d[i]}'
However, I feel there must be a more "canonical" way of doing this.
A simple bash script?
But this will store all of stdin in memory; why not store the output in a file and read the file twice if you need to?
full=""
while read line
do
echo "$line"
full="$full$line\n"
done
printf $full
The best way would be to store the output in a file and show it later on. Using tee has the advantage of showing the output as it comes:
if tmpfile=$(mktemp); then
    commands | tee "$tmpfile"
    cat "$tmpfile"
    rm "$tmpfile"
else
    echo "Error creating temporary file" >&2
    exit 1
fi
If the amount of output is limited, you can do this:
output=$(commands); echo "$output"; echo "$output"
Echoing the variable twice keeps a newline between the two copies; command substitution strips the trailing newline, so "$output$output" would run the last and first lines together.
So I have a Linux program that runs in a while(true) loop, waits for user input, processes it, and prints the result to stdout.
I want to write a shell script that opens this program, feeds it lines from a text file one at a time, and saves the program's output for each line to a file.
So I want to know if there is a command for:
- opening a program
- sending text to a process
- receiving output from that program
Many thanks.
It sounds like you want something like this:
cat file | while read line; do
    answer=$(echo "$line" | prog)
done
This will run a new instance of prog for each line. The line will be the standard input of prog and the output will be put in the variable answer for your script to further process.
Some people object to the "cat file |" as this creates a process where you don't really need one. You can also use file redirection by putting it after the done:
while read line; do
    answer=$(echo "$line" | prog)
done < file
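If you also want to save the output for each line, you can redirect the whole loop to a file (input.txt and results.txt are just placeholder names):
while read -r line; do
    echo "$line" | prog    # prog's output for this line goes into results.txt
done < input.txt > results.txt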
Have you looked at pipes and redirections? You can use pipes to feed the output of one program into another. You can use redirection to send the contents of files to programs, and/or write output to files.
I assume you want a script written in bash.
To open a file you just need to type its name.
To send text to a program you either pass it through | or with < (taking input from a file).
To receive output you use > to redirect output to some file, or >> to redirect as well but append the results instead of truncating the file.
To achieve what you want in bash, you could write:
#!/bin/bash
cat input_file | xargs -L 1 -I {} your_program {} >> output_file
This calls your_program once for each line of input_file and appends the results to output_file.