I want to write a script that takes a command as its argument and executes it. For example, if the script is called ex_script, writing
ex_script "cat file1.txt | wc -l"
and the ex_script is:
var=`"${1}"`
echo $var
should assign the number of lines in file1.txt to var and then print it.
But it gives me
./ex_script: line 3: cat file1.txt | wc -l: command not found
How do I write this correctly?
Use eval
var=$(eval "$1")
echo "$var"
I want to use tail in my custom pipe command.
For example, I want to execute this command:
$ ls -1 | tail -n 1 | awk '{print "last file is "$1}'
last file is test.txt
And I want to make it short by making my own custom script. It looks like this:
$ ls -1 | myscript
last file is test.txt
I know myscript can get input from "ls -1" by this code:
while read line; do
    echo last file is $line
done
But I don't know how to use "tail -n 1" in the custom pipe command code above.
Is there a way to use a pipe command in another pipe command script?
Or do I have to implement the code which does the same process as "tail -n 1" myself?
I hope bash has some solution for this.
Try putting just this in myscript
tail -n 1 | awk '{print "last file is "$1}'
This works because the first command (tail) consumes the stdin of your script. In general, scripts work as though you had typed their contents as-is into the terminal.
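So a minimal sketch of myscript (assuming it has been made executable, e.g. with chmod +x myscript):

#!/bin/bash
# the script's stdin is handed straight to tail, which keeps only the last line
tail -n 1 | awk '{print "last file is "$1}'

$ ls -1 | ./myscript
last file is test.txt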
Is there a bash command like cat except that it takes the filename from stdin rather than the first argument? For example:
echo "/home/root/file.txt" | somecommand
would have the same output as cat /home/root/file.txt
Found the answer:
xargs cat
For example:
echo "/home/root/file.txt" | xargs cat
If the filename that you're expecting doesn't have a space:
#!/bin/bash
read filename
cat $filename
The read builtin shell command reads a line of tokens from stdin into shell variables.
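If the expected name might carry leading/trailing spaces or backslashes, a slightly more defensive sketch (still one filename per line on stdin; the name somecommand is just the placeholder from the question):

#!/bin/bash
# IFS= keeps surrounding whitespace, -r stops backslash interpretation
IFS= read -r filename
cat -- "$filename"

echo "/home/root/file.txt" | ./somecommand then behaves like cat /home/root/file.txt.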
I am stuck on this problem:
I wrote a shell script that gets a large file with many lines from stdin; this is how it is executed:
./script < filename
I want to use the file as input to another operation in the script, but I don't know how to store the file's name in a variable.
The script takes a file from stdin and then runs an awk operation on that same file. Say I write in the script:
script:
#!/bin/sh
...
read file
...
awk '...' < "$file"
...
it only reads the first line of the input file.
And I found a way to write it like this:
Min=-1
while read line; do
    n=$(echo $line | awk -F$delim '{print NF}')
    if [ $Min -eq -1 ] || [ $n -lt $Min ]; then
        Min=$n
    fi
done
but it takes a very, very long time to finish; it seems awk accounts for most of the time.
So how can I improve this?
/dev/stdin can be quite useful here.
In fact, it's just a chain of links to your input.
So, writing cat /dev/stdin will give you all the input from your file, and you can avoid using the input filename at all.
Now, the answer to the question :) Recursively read links, beginning at /dev/stdin, and you will get the filename. Bash code:
r(){
    l=`readlink "$1"`
    if [ $? -ne 0 ]
    then
        echo "$1"
    else
        r "$l"
    fi
}
filename=`r /dev/stdin`
echo "$filename"
UPD:
In Ubuntu I found the -f option to readlink, i.e. readlink -f /dev/stdin gives the same output. This option may be absent on some systems.
UPD2: tests (test.sh is the code above):
$ ./test.sh <input # that is a file
/home/sfedorov/input
$ ./test.sh <<EOF
> line
> EOF
/tmp/sh-thd-214216298213
$ echo 1 | ./test.sh
pipe:[91219]
$ readlink -f /dev/stdin < input
/home/sfedorov/input
$ readlink -f /dev/stdin << EOF
> line
> EOF
/tmp/sh-thd-3423766239895 (deleted)
$ echo 1 | readlink -f /dev/stdin
/proc/18489/fd/pipe:[92382]
You're overdoing this. The way you invoke your script:
the file contents are the script's standard input
the script receives no argument
But awk already takes input from stdin by default, so all you need to do to make this work is:
not give awk any file name argument; its input will then automatically be the wrapping script's stdin
not consume any of that input before the wrapping script reaches the awk part. Specifically: no read
If that's all there is to your script, it reduces to the awk invocation, so you might consider doing away with it altogether and just call awk directly. Or make your script directly an awk one instead of a sh one.
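For instance, a minimal sketch of the reduced script (keeping the question's invocation ./script < filename; the delimiter value shown is hypothetical and stands in for whatever the original script sets):

#!/bin/sh
delim=','   # hypothetical delimiter; use whatever your real script sets
# one awk pass over the script's own stdin: track the smallest field count seen
awk -F"$delim" 'NR == 1 || NF < min { min = NF } END { print min }'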
Aside: the reason your while read line/multiple awk variant (the one in the question) is slow is that it spawns an awk process for each and every line of the input, and process spawning is orders of magnitude slower than awk processing a single line. The reason the generate-tmpfile/single-awk variant (the one in your answer) is still a bit slow is that it generates the tmpfile line by line, reopening it to append every time.
Modify your script so that it takes the input file name as an argument, then read from the file in your script:
$ ./script filename
In script:
filename=$1
awk '...' < "$filename"
If your script just reads from standard input, there is no guarantee that there is a named file providing the input; it could just as easily be reading from a pipe or a network socket.
How about invoking the script differently: pipe the output of cat filename into your script, so that the standard output of cat filename becomes the standard input of your script (and in this case of the awk command).
For example, I have a file Names.data and a script showNames.sh, executed as follows:
cat Names.data | ./showNames.sh
Contents of filename Names.data
Huckleberry Finn
Jack Spratt
Humpty Dumpty
Contents of script showNames.sh
#!/bin/bash
#whatever awk commands you need
awk "{ print }"
Well, I finally found this way to solve my problem, although it takes several seconds.
grep '.*' >> /tmp/tmpfile
Min=$(awk -F$delim 'NF < min || min == "" { min = NF } END { print min }' < /tmp/tmpfile)
Just append each line to a temporary file so that, after reading from stdin, the tmpfile is the same as the input file.
I am trying to pipe the output of a tail command into another bash script to process:
tail -n +1 -f your_log_file | myscript.sh
However, when I run it, the $1 parameter (inside myscript.sh) never gets set. What am I missing? How do I pipe the output to be the input parameter of the script?
PS - I want tail to run forever and continue piping each individual line into the script.
Edit
For now, the entire contents of myscript.sh are:
echo $1;
Generally, here is one way to handle standard input to a script:
#!/bin/bash
while read line; do
    echo "$line"
done
That is a very rough bash equivalent to cat. It does demonstrate a key fact: each command inside the script inherits its standard input from the shell, so you don't really need to do anything special to get access to the data coming in. read takes its input from the shell, which (in your case) is getting its input from the tail process connected to it via the pipe.
As another example, consider this script; we'll call it 'mygrep.sh'.
#!/bin/bash
grep "$1"
Now the pipeline
some-text-producing-command | ./mygrep.sh bob
behaves identically to
some-text-producing-command | grep bob
$1 is set if you call your script like this:
./myscript.sh foo
Then $1 has the value "foo".
The positional parameters and standard input are separate; you could do this
tail -n +1 -f your_log_file | myscript.sh foo
Now standard input is still coming from the tail process, and $1 is still set to 'foo'.
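A small sketch of a myscript.sh that uses both at once (the foo prefix is just a placeholder):

#!/bin/bash
# $1 comes from the command line; the lines come from stdin (here, from tail)
prefix="$1"
while IFS= read -r line; do
    echo "$prefix: $line"
done

tail -n +1 -f your_log_file | ./myscript.sh foo then prints every log line prefixed with "foo:".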
Perhaps you were confused with awk?
tail -n +1 -f your_log_file | awk '{
print $1
}'
would print the first column from the output of the tail command.
In the shell, a similar effect can be achieved with:
tail -n +1 -f your_log_file | while read first junk; do
echo "$first"
done
Alternatively, you could put the whole while ... done loop inside myscript.sh
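That is, a sketch of myscript.sh with the loop moved inside it, so the question's pipeline tail -n +1 -f your_log_file | myscript.sh works as written:

#!/bin/bash
# print the first whitespace-separated field of every line arriving on stdin
while read first junk; do
    echo "$first"
done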
Piping connects the output (stdout) of one process to the input (stdin) of another process. stdin is not the same thing as the arguments sent to a process when it starts.
What you want to do is convert the lines in the output of your first process into arguments for the second process. This is exactly what the xargs command is for.
All you need to do is have xargs run your script, passing each line to it as arguments:
tail -n +1 -f your_log_file | xargs -L 1 myscript.sh
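With myscript.sh containing just echo $1 (as in your edit), that pipeline should print the first word of each log line; a sketch of the pieces:

#!/bin/bash
# myscript.sh: xargs hands each log line's words to the script as $1, $2, ...
echo "$1"

Note that xargs applies its own quote and backslash processing to its input, so log lines containing quote characters may need extra handling.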
I'm trying to get this script to read input from a file given on the command line, match the user id in the file using grep, and output those lines, numbered 1) through n), to a new file.
So far my script looks like this:
#!/bin/bash
linenum=1
grep $USER $1 |
while [ read LINE ]
do
echo $linenum ")" $LINE >> usrout
$linenum+=1
done
When I run it as ./username file
I get
line 4: [: read: unary operator expected
Could anyone explain the problem to me?
Thanks
Just remove the [ ] around read LINE; the square brackets are for performing tests (file exists, string is empty, etc.).
How about the following?
$ grep $USER file | cat -n >usrout
Leave off the square brackets.
while read LINE; do
    echo $linenum ")" $LINE
    linenum=$((linenum + 1))
done >> usrout
just use awk
awk -vu="$USER" '$0~u{print ++d") "$0}' file
or
grep $USER file | nl
or with the shell (no need to use grep):
i=1
while read -r line
do
    case "$line" in
        *"$USER"*) echo $((i++)) $line >> newfile;;
    esac
done < "file"
Why not just use grep with the -n (or --line-number) switch?
$ grep -n ${USERNAME} ${FILE}
The -n switch gives the line number that the match was found on in the file. From grep's man page:
-n, --line-number
Prefix each line of output with the 1-based line number
within its input file.
So, running this against the /etc/passwd file in linux for user test_user, gives:
31:test_user:x:5000:5000:Test User,,,:/home/test_user:/bin/bash
This shows that the test_user account appears on line 31 of the /etc/passwd file.
Also, instead of $foo+=1, you should write foo=$(($foo+1)).