Bash: How to pass input file content as command argument - bash

I have a command (call it command) that takes an input file as an argument. Is there a way to call command without actually creating a file?
I would like to achieve the following behavior:
$ echo "content" > tempfile
$ command tempfile
$ rm tempfile
if possible:
as a one-liner,
without creating a file,
using either a bash (or sh) feature or a "well-known" command (as standard as xargs)
It feels like there must be an easy way to do it but I can't find it.

Just use a process substitution.
command <(echo "content")
Bash will create a FIFO, or a special file under /dev/fd, connected to the standard output of the process, and substitute its path on the command line. For example:
$ echo <(echo hi)
/dev/fd/63
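As a concrete sketch, diff is a command that requires file-name arguments, and process substitution lets it compare the output of two commands without any hand-made temporary files:

```shell
# diff needs two file names; process substitution supplies paths that
# read as the output of the two printf commands
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo "identical"
```

Here diff sees two /dev/fd paths, reads both, finds no differences, and exits 0.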

Related

How to automate read command in shell script?

I am trying to build a shell script. One of the commands used in this script uses the read command to demand a parameter before it completes. I want to pass the same argument every time, so can I automate this?
In short, how do I automate the read command from a shell script?
For certain reasons I cannot share the actual script.
If read is reading from standard input, you can just redirect from a file containing the necessary data:
$ cat foo.txt
a
b
$ someScript.sh < foo.txt
or pipe the data from another command:
$ printf 'a\nb\n' | someScript.sh
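To make this concrete, here is a sketch with a throwaway stand-in for your script (the script body is hypothetical, just enough to show read consuming the redirected input):

```shell
# Create a stand-in script that prompts with `read`, then feed it canned input.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/bin/sh
read name
echo "Hello, $name"
EOF
printf 'world\n' | sh "$tmp"   # prints: Hello, world
rm -f "$tmp"
```

Each read consumes one line of the piped data, so supply one line per prompt the script issues.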

Process substitution, /dev/fd/63

I have a script that takes a file name as input in $1, processes it, and creates an output file named ${1}.output.log, and it works fine. For example, if I run
./myscript filename.txt
it processes the file and generates an output file named filename.txt.output.log.
But when I tried to use process substitution to give input to this script, like
./myscript <(echo something)
it failed: the script can no longer create a file named ${1}.output.log, because now $1 is not an actual file and doesn't exist in the working directory where the script is supposed to create its output.
Any suggestions to work around this problem?
The problem is probably that when using process substitution you are trying to create a file in /dev, more specifically, /dev/fd/63.output.log
I recommend doing this:
output_file="$( sed 's|/dev/fd/|./process_substitution-|' <<< "$1" ).output.log"
echo "my output" >> "$output_file"
We use sed to replace /dev/fd/ with ./process_substitution- so the file gets created in the current working directory (pwd) with the name process_substitution-63.output.log.
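To see the substitution on its own (63 is just an example; bash picks the descriptor number at run time):

```shell
# turn the process-substitution path into a name usable in the current directory
printf '%s' '/dev/fd/63' | sed 's|/dev/fd/|./process_substitution-|'
# prints: ./process_substitution-63
```

Ordinary file-name arguments such as filename.txt don't match the /dev/fd/ pattern, so they pass through sed unchanged and the script keeps its original behavior for them.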

Reading full file from Standard Input and Supplying it to a command in ksh

I am trying to read contents of a file given from standard input into a script. Any ideas how to do that?
Basically what I want is:
someScript.ksh < textFile.txt
Inside the ksh, I am using a binary which will read data from "textFile.txt" if the file is given on the standard input.
Any ideas how do I "pass" the contents of the given input file, if any, to another binary inside the script?
You haven't really given us enough information to answer the question, but here are a few ideas.
If you have a script that you want to accept data on stdin, and that script calls something else that expects data to be passed in as a filename on the command line, you can take stdin and dump it to a temporary file. Something like:
#!/bin/sh
tmpfile=$(mktemp tmpXXXXXX)
cat > "$tmpfile"
/some/other/command "$tmpfile"
rm -f "$tmpfile"
(In practice, you would probably use trap to clean up the temporary file on exit).
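A sketch of that trap-based cleanup, with wc -c standing in for the hypothetical /some/other/command (the subshell body means the EXIT trap fires when the function returns):

```shell
# process_stdin dumps stdin to a temp file, hands the file name to a command,
# and relies on the EXIT trap to remove the file even if the command fails
process_stdin() (
  tmpfile=$(mktemp) || return 1
  trap 'rm -f "$tmpfile"' EXIT
  cat > "$tmpfile"               # capture stdin into the temp file
  wc -c < "$tmpfile"             # stand-in for: /some/other/command "$tmpfile"
)
printf 'abcd' | process_stdin    # prints the byte count, 4
```

Setting the trap immediately after mktemp succeeds closes the window where an interrupt could leave the file behind.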
If instead the script is calling another command that also expects input on stdin, you don't really have to do anything special. Inside your script, stdin of anything you call will be connected to stdin of the calling script, and as long as you haven't previously consumed the input you should be all set.
E.g., given a script like this:
#!/bin/sh
sed s/hello/goodbye/
I can run:
echo hello world | sh myscript.sh
And get:
goodbye world

Bash - run command using lines from text file

I am a bit new to bash scripting and have not been able to find an answer for what I am about to ask, that is if it is possible.
I have a text file created by searching a directory with grep for files containing "Name". It outputs the following; say the file is called PathOutput.txt:
/mnt/volnfs3rvdata/4007dc45-477a-45b2-9c28-43bc5bbb4f9f/master/vms/4c3483af-b41a-4979-98b7-6f6a4f147670/4c3483af-b41a-4979-98b7-6f6a4f147670.ovf
/mnt/volnfs3rvdata/4007dc45-477a-45b2-9c28-43bc5bbb4f9f/master/vms/5b5538a5-423f-4eaf-9678-d377a6706c58/5b5538a5-423f-4eaf-9678-d377a6706c58.ovf
/mnt/volnfs3rvdata/4007dc45-477a-45b2-9c28-43bc5bbb4f9f/master/vms/0e2d1451-45cc-456e-846d-d174515a60dd/0e2d1451-45cc-456e-846d-d174515a60dd.ovf
/mnt/volnfs3rvdata/4007dc45-477a-45b2-9c28-43bc5bbb4f9f/master/vms/daaf622e-e035-4c1b-a6d7-8ee209c4ded6/daaf622e-e035-4c1b-a6d7-8ee209c4ded6.ovf
/mnt/volnfs3rvdata/4007dc45-477a-45b2-9c28-43bc5bbb4f9f/master/vms/48f52ab9-64df-4b1e-9c35-c024ae2a64c4/48f52ab9-64df-4b1e-9c35-c024ae2a64c4.ovf
Now what I would like to do, if possible, is loop through the file, running a command with each line of the text file as a variable. But I cannot work out a way to run the command against each line. With all my playing around, I did get a result where it ran once against the first line, but that was when the output of grep was piped into another command.
At the moment my bash script just extracts the paths to PathOutput.txt, uses cat to display them, and then I copy the path I want into a read -p prompt to create a variable to run against a command. It works fine, but I have to run the script once for each path. If I could get the command to loop through each line, I could output the results to a text file.
Is it possible?
You could use xargs:
$ xargs -n1 echo "arg:" < file
arg: /mnt/volnfs3rvdata/4007dc45-477a-45b2-9c28-43bc5bbb4f9f/master/vms/4c3483af-b41a-4979-98b7-6f6a4f147670/4c3483af-b41a-4979-98b7-6f6a4f147670.ovf
arg: /mnt/volnfs3rvdata/4007dc45-477a-45b2-9c28-43bc5bbb4f9f/master/vms/5b5538a5-423f-4eaf-9678-d377a6706c58/5b5538a5-423f-4eaf-9678-d377a6706c58.ovf
arg: /mnt/volnfs3rvdata/4007dc45-477a-45b2-9c28-43bc5bbb4f9f/master/vms/0e2d1451-45cc-456e-846d-d174515a60dd/0e2d1451-45cc-456e-846d-d174515a60dd.ovf
arg: /mnt/volnfs3rvdata/4007dc45-477a-45b2-9c28-43bc5bbb4f9f/master/vms/daaf622e-e035-4c1b-a6d7-8ee209c4ded6/daaf622e-e035-4c1b-a6d7-8ee209c4ded6.ovf
arg: /mnt/volnfs3rvdata/4007dc45-477a-45b2-9c28-43bc5bbb4f9f/master/vms/48f52ab9-64df-4b1e-9c35-c024ae2a64c4/48f52ab9-64df-4b1e-9c35-c024ae2a64c4.ovf
Just replace echo "arg:" with the command you actually want to use. If you want to pass all the files at once, drop the -n1 option.
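If the path needs to go somewhere other than the end of the command line, xargs -I lets you place it explicitly (a sketch with echo standing in for your real command):

```shell
# -I {} runs the command once per input line, substituting {} with the line;
# each whole line is passed as a single argument
printf 'one.ovf\ntwo.ovf\n' | xargs -I {} echo "processing {}"
# prints: processing one.ovf
#         processing two.ovf
```

With -I, xargs also treats each input line as one argument, which keeps paths containing spaces intact.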
If I understand correctly, you may want something like this:
while IFS= read -r L; do
  echo "I read line $L from PathOutput.txt"
  # do something useful with "$L"
done < PathOutput.txt
Reading line by line with read -r (rather than looping over the output of cat) keeps paths containing spaces intact and avoids unwanted glob expansion.

Bash process substitution: what does `echo >(ls)` do?

Here is an example of Bash's process substitution:
zjhui@ubuntu:~/Desktop$ echo >(ls)
/dev/fd/63
zjhui@ubuntu:~/Desktop$ abs-guide.pdf
Then I get a cursor waiting for a command.
/dev/fd/63 doesn't exist. I think what happens is:
Output the filename used in /dev/fd
Execute the ls in >(ls)
Is this right? Why is there a cursor waiting for input?
When you execute echo >(ls), bash replaces the >(ls) with a path such as /dev/fd/63 and runs ls with that file connected to its standard input. echo does not use its arguments as file names, so it simply prints the path, and ls does not read standard input. Bash then writes its standard prompt, but the output of ls arrives after it. You can type any bash command there; you have just lost the visual cue of the prompt, which is further up the screen.
echo >(ls) is not something that is likely to ever be useful.
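For contrast, here is a sketch of a case where >(...) is genuinely useful: tee can copy a stream into a side command while the data keeps flowing down the pipeline (line_count.txt is a hypothetical output name, written asynchronously by the substituted process):

```shell
# the stream goes both into `wc -l` (via the >() path) and on to `tr`
printf 'alpha\nbeta\n' | tee >(wc -l > line_count.txt) | tr 'a-z' 'A-Z'
# prints: ALPHA
#         BETA
```

Here something actually writes to the substituted path (tee), which is what >(...) is for; echo never does.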