I'm capturing stdout (a log) from a file using tail -f file_name, saving lines containing a specific string with grep, and using sed to exit the tail:
tail -f log.txt | sed '/INFO/q' | grep 'INFO' > info_file.txt
This works fine, but I want to terminate the command if it does not find the pattern (INFO) in the log file after some time.
I want something like this (which does not work) to exit the script after a timeout of 60 seconds:
tail -f log.txt | sed '/INFO/q' | grep 'INFO' | read -t 60
Any suggestions?
This seems to work for me...
read -t 60 < <(tail -f log.txt | sed '/INFO/q' | grep 'INFO')
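read exits with a status greater than 128 when the -t timeout expires, so you can also tell the two outcomes apart; a sketch:
if read -t 60 line < <(tail -f log.txt | sed '/INFO/q' | grep 'INFO'); then
  printf '%s\n' "$line" > info_file.txt
else
  echo "no INFO line within 60 seconds" >&2
fi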
Since you only want to capture one line:
#!/bin/bash
IFS= read -r -t 60 line < <(tail -f log.txt | awk '/INFO/ { print; exit; }')
printf '%s\n' "$line" >info_file.txt
For a more general case, where you want to capture more than one line, the following uses no external commands other than tail:
#!/usr/bin/env bash
end_time=$(( SECONDS + 60 ))
while (( SECONDS < end_time )); do
  IFS= read -t 1 -r line && [[ $line = *INFO* ]] && printf '%s\n' "$line"
done < <(tail -f log.txt)
A few notes:
SECONDS is a built-in variable in bash which, when read, will retrieve the time in seconds since the shell was started. (Assigning to it merely resets the counter; it loses its special behavior entirely only if it is ever unset -- avoiding such mishaps is part of why the POSIX variable-naming conventions reserving names with lowercase characters for application use are valuable).
(( )) creates an arithmetic context; all content within is treated as integer math.
<( ) is a process substitution; it evaluates to the name of a file-like object (named pipe, /dev/fd reference, or similar) which, when read from, will contain output from the command contained therein. See BashFAQ #24 for a discussion of why this is more suitable than piping to read.
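To see why that last point matters, compare (in a default bash, where each element of a pipeline runs in a subshell):
$ echo hello | read var; echo "var=$var"
var=
$ read var < <(echo hello); echo "var=$var"
var=hello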
The timeout command (part of the coreutils package on Debian/Ubuntu) seems suitable:
timeout 1m tail -f log.txt | grep 'INFO'
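Note that this times out only the tail; if you want the timeout to cover the whole pipeline and keep the early exit on a match, one option is to wrap it in bash -c (a sketch, assuming GNU timeout, which signals the whole process group):
timeout 60 bash -c "tail -f log.txt | sed '/INFO/q' | grep 'INFO' > info_file.txt"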
I have made two programs and I'm trying to call one from the other, but this is appearing on my screen:
cp: cannot stat ‘PerShip/.csv’: No such file or directory
cp: target ‘tmpship.csv’ is not a directory
I don't know what to do. Here are the programs. Could somebody help me please?
#!/bin/bash
shipname=$1
imo=$(grep "$shipname" shipsNAME-IMO.txt | cut -d "," -f 2)
cp PerShip/$imo'.csv' tmpship.csv
dist=$(octave -q ShipDistance.m 2>/dev/null)
grep "$shipname" shipsNAME-IMO.txt | cut -d "," -f 2 > IMO.txt
idnumber=$(cut -b 4-10 IMO.txt)
echo $idnumber,$dist
#!/bin/bash
rm -f shipsdist.csv
for ship in $(cat shipsNAME-IMO.txt | cut -d "," -f 1)
do
  ./FindShipDistance "$ship" >> shipsdist.csv
done
cat shipsdist.csv | sort | head -n 1
The code and error messages presented suggest that the second script is calling the first with an empty command-line argument. That would certainly happen if input file shipsNAME-IMO.txt contained any empty lines or otherwise any lines with an empty first field. An empty line at the beginning or end would do it.
I suggest
using the read command to read the data, and manipulating IFS to parse out comma-delimited fields
validating your inputs and other data early and often
making your scripts behave more pleasantly in the event of predictable failures
more generally, using internal Bash features instead of external programs where the former are reasonably natural.
For example:
#!/bin/bash
# Validate one command-line argument
[[ -n "$1" ]] || { echo empty ship name 1>&2; exit 1; }
# Read and validate an IMO corresponding to the argument
IFS=, read -r dummy imo tail < <(grep -F -- "$1" shipsNAME-IMO.txt)
[[ -f PerShip/"${imo}.csv" ]] || { echo no data for "'$imo'" 1>&2; exit 1; }
# Perform the distance calculation and output the result
cp PerShip/"${imo}.csv" tmpship.csv
dist=$(octave -q ShipDistance.m 2>/dev/null) ||
{ echo "failed to compute ship distance for '${imo}'" 2>&1; exit 1; }
echo "${imo:3:7},${dist}"
and
#!/bin/bash
# Note: the original shipsdist.csv will be clobbered
while IFS=, read -r ship tail; do
  # Ignore any empty ship name, however it might arise
  [[ -n "$ship" ]] && ./FindShipDistance "$ship"
done < shipsNAME-IMO.txt |
  tee shipsdist.csv |
  sort |
  head -n 1
Note that making the while loop in the second script part of a pipeline will cause it to run in a subshell. That is sometimes a gotcha, but it won't cause any problem in this case.
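For illustration, here is the kind of gotcha that refers to: a variable assigned inside a piped-to while loop is lost when the loop's subshell exits.
count=0
printf 'a\nb\n' | while read -r line; do (( count++ )); done
echo "$count"    # prints 0, not 2: the loop ran in a subshell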
I'm using a bash script to parse information from a PDF and use it to rename the file (with the help of pdfgrep). However, after some fiddling, I'm receiving a "Bad Substitution" error on line 5. Any ideas on how to fix it?
shopt -s nullglob nocaseglob
for f in *.pdf; do
id1=$(pdfgrep -i "ID #: " "$f" | grep -oE "[M][0-9][0-9]+")
id2=$(pdfgrep -i "Second ID: " "$f" | grep -oE "[V][0-9][0-9]+")
$({ read dobmonth; read dobday; read dobyear; } < (pdfgrep -i "Date Of Birth: " "$f" | grep -oE "[0-9]+"))
# Check id1 is found, else do nothing
if [ ${#id1} ]; then
mv "$f" "${id1}_${id2}_${printf '%02d-%02d-%04d\n' "$dobmonth" "$dobday" "$dobyear"}.pdf"
fi
done
There are several unrelated bugs in this code; a corrected version might look like the following:
#!/usr/bin/env bash
shopt -s nullglob nocaseglob
for f in *.pdf; do
id1=$(pdfgrep -i "ID #: " "$f" | grep -oE "[M][0-9][0-9]+") || continue
id2=$(pdfgrep -i "Second ID: " "$f" | grep -oE "[V][0-9][0-9]+") || continue
{ read dobmonth; read dobday; read dobyear; } < <(pdfgrep -i "Date Of Birth: " "$f" | grep -oE "[0-9]+")
printf -v date '%02d-%02d-%04d' "$dobmonth" "$dobday" "$dobyear"
mv -- "$f" "${id1}_${id2}_${date}.pdf"
done
< (...) isn't meaningful bash syntax. If you want to redirect from a process substitution, you should use the redirection syntax < and the process substitution <(...) separately.
$(...) generates a subshell -- a separate process with its own memory, such that variables assigned in that subprocess aren't exposed to the larger shell as a whole. Consequently, if you want the contents you set with read to be visible, you can't have them be in a subshell.
${printf ...} isn't meaningful syntax. Perhaps you wanted a command substitution? That would be $(printf ...), not ${printf ...}. However, it's more efficient to use printf -v varname 'fmt' ..., which avoids the overhead of forking off a subshell altogether (see the short demo after these notes).
Because we put the || continues on the id1=$(... | grep ...) command, we no longer need to test whether id1 is nonempty: The continue will trigger and cause the shell to continue to the next file should the grep fail.
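To illustrate the printf -v point with hypothetical date values:
printf -v date '%02d-%02d-%04d' 5 21 1996
echo "$date"    # 05-21-1996, assigned in the current shell without forking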
Do what Charles suggests wrt creating the new file name, but you might consider a different approach to parsing the PDF file, to reduce how many pdfgreps and pipes and greps you're running on each file. I don't have pdfgrep on my system, nor do I know what your input file looks like, but if we use this input file:
$ cat file
foo
ID #: M13
foo
Date Of Birth: 05 21 1996
foo
Second ID: V27
foo
and grep -E in place of pdfgrep, then here's how I'd get the info from the input file by reading it just once (with pdfgrep, in your case) and parsing that output with awk, instead of reading it multiple times and using multiple pipes and greps to extract the info you need:
$ grep -E -i '(ID #|Second ID|Date Of Birth): ' file |
awk -F': +' '{f[$1]=$2} END{print f["ID #"], f["Second ID"], f["Date Of Birth"]}'
M13 V27 05 21 1996
Given that, you can use the same read approach to save the output in variables (or an array); see the sketch below. You obviously may need to massage the awk command depending on what your pdfgrep output actually looks like.
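Wired into variables with read, that might look like the following; a sketch assuming the awk output format shown above (substitute pdfgrep for grep -E as appropriate):
read -r id1 id2 dobmonth dobday dobyear < <(
  grep -E -i '(ID #|Second ID|Date Of Birth): ' "$f" |
  awk -F': +' '{f[$1]=$2} END{print f["ID #"], f["Second ID"], f["Date Of Birth"]}'
)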
I have file with a few lines:
x 1
y 2
z 3 t
I need to pass each line as a parameter to some program:
$ program "x 1" "y 2" "z 3 t"
I know how to do it with two commands:
$ readarray -t a < file
$ program "${a[@]}"
How can I do it with one command? Something like this:
$ program ??? file ???
The (default) options of your readarray command indicate that your file items are separated by newlines.
So in order to achieve what you want in one command, you can take advantage of the special IFS variable to make word splitting occur at newlines, and call your program with an unquoted command substitution:
IFS=$'\n'; program $(cat file)
As suggested by @CharlesDuffy:
you may want to disable globbing by running set -f beforehand, and if you want to keep these modifications local, you can enclose the whole thing in a subshell:
( set -f; IFS=$'\n'; program $(cat file) )
to avoid the performance penalty of the parens and of the /bin/cat process, you can write instead:
( set -f; IFS=$'\n'; exec program $(<file) )
where $(<file) is a Bash equivalent of $(cat file) (faster, as it doesn't require forking /bin/cat), and exec consumes the subshell created by the parens.
However, note that the exec trick won't work and should be removed if program is not a real program in the PATH (that is, you'll get exec: program: not found if program is just a function defined in your script).
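To see the splitting in action, here's a quick check against a stand-in (program below is a hypothetical stub; since it's a shell function here, the exec variant is omitted, as noted above):
$ program() { printf '<%s>\n' "$@"; }
$ printf 'x 1\ny 2\nz 3 t\n' > file
$ ( set -f; IFS=$'\n'; program $(<file) )
<x 1>
<y 2>
<z 3 t>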
Passing a set of params should be done in a more organized way:
In this example, I'm looking for a file containing chk_disk_issue=something etc., so I set the values by reading a config file, which I pass in as a param.
# -- read specific variables from the config file (if found) --
if [ -f "${file}" ]; then
  while IFS= read -r line; do
    if ! [[ $line = *"#"* ]]; then
      var="$(echo $line | cut -d'=' -f1)"
      case "$var" in
        chk_disk_issue)
          chk_disk_issue="$(echo $line | tr -d '[:space:]' | cut -d'=' -f2 | sed 's/[^0-9]*//g')"
          ;;
        chk_mem_issue)
          chk_mem_issue="$(echo $line | tr -d '[:space:]' | cut -d'=' -f2 | sed 's/[^0-9]*//g')"
          ;;
        chk_cpu_issue)
          chk_cpu_issue="$(echo $line | tr -d '[:space:]' | cut -d'=' -f2 | sed 's/[^0-9]*//g')"
          ;;
      esac
    fi
  done < "${file}"
fi
If these are not params, then find a way for your script to read them as data inside the script and pass in the file name; a more compact sketch of the same idea follows.
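For comparison, here is a tighter version of the same key=value parsing that lets read do the splitting; a sketch assuming plain key=value lines (no = inside values) and the variable names used above:
while IFS='=' read -r key value; do
  key=${key//[[:space:]]/}                    # strip whitespace from the key
  [[ -z $key || $key = \#* ]] && continue     # skip blanks and comment lines
  case "$key" in
    chk_disk_issue|chk_mem_issue|chk_cpu_issue)
      printf -v "$key" '%s' "${value//[^0-9]/}"   # keep digits only, assign by name
      ;;
  esac
done < "${file}"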
I have a long-running command which outputs periodically. To demonstrate, let's assume it is:
function my_cmd()
{
  for i in {1..9}; do
    echo -n $i
    for j in {1..$i}
      echo -n " "
    echo $i
    sleep 1
  done
}
the output will be:
1 1
2  2
3   3
4    4
5     5
6      6
7       7
8        8
9         9
I want to display the command output and, at the same time, save it to a file.
This can be done with my_cmd | tee -a res.txt.
Now I want to display the output to the terminal as-is, but save a transformed flavor of it to the file, say with sed "s/ //g".
so the res.txt becomes:
11
22
33
44
55
66
77
88
99
How can I do this transformation on the fly, without waiting for the command to exit and then reading the file again?
Note that in your original code, {1..$i} is an error because sequences can't contain variables. I've replaced it with seq. Also, you're missing a do and a done for the inner for loop.
At any rate, I would use process substitution.
#!/usr/bin/env bash
function my_cmd {
  for i in {1..9}; do
    printf '%d' "$i"
    for j in $(seq 1 $i); do
      printf ' '
    done
    printf '%d\n' "$j"   # $j equals $i once the inner loop finishes
    sleep 1
  done
}
my_cmd | tee >(tr -d ' ' >> res.txt)
Process substitution usually causes bash to create an entry in /dev/fd which is fed to the command in question. The contents of the substitution run asynchronously, so it doesn't block the process sending data to it.
Note that the process substitution isn't a REAL file, so the -a option for tee is meaningless. If you really want to append to your output file, >> within the substitution is the way to go.
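You can see the file-like name a substitution expands to; on a typical Linux build of bash, for example (the exact fd number will vary):
$ echo >(true)
/dev/fd/63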
If you don't like process substitution, another option would be to redirect to alternate file descriptors. For example, instead of the last line in the script above, you could use:
exec 5>&1
my_cmd | tee /dev/fd/5 | tr -d ' ' > res.txt
exec 5>&-
This creates a file descriptor, /dev/fd/5, which redirects to your real stdout, the terminal. It then tells tee to write to this, allowing the normal stdout from tee to be processed by additional pipe elements before final redirection to your log file.
The method you choose is up to you. I find process substitution clearer.
There is something you need to modify in your function, and you may use tee in the for loop to print and write the file at the same time. The following script may get the result you desire.
#!/bin/bash
filename="a.txt"
[ -f $filename ] && rm $filename
for i in {1..9}; do
  echo -n $i | tee -a $filename
  for ((j=1; j<=$i; j++)); do
    echo -n " "
  done
  echo $i | tee -a $filename
  sleep 1
done
Instead of a double loop, I would use printf and its field-width capability (%<N>s) to pad with blank characters.
Moreover, I would print twice (once to stdout and once to your file) rather than using a pipe and starting new processes.
So your function could look like this:
function my_cmd() {
  for i in {1..9}; do
    printf "%s %${i}s\n" $i $i
    printf "%s%s\n" $i $i >> res.txt
  done
}
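For example, with a field width of 4, printf right-justifies the second value within four columns:
$ printf "%s %4s\n" 3 3
3    3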
I have written the following filter as a function in my ~/.bash_profile:
hilite() {
  export REGEX_SED=$(echo $1 | sed "s/[|()]/\\\&/g")
  while read line
  do
    echo $line | egrep "$1" | sed "s/$REGEX_SED/\x1b[7m&\x1b[0m/g"
  done
  exit 0
}
to find lines of anything piped into it matching a regular expression, and highlight matches using ANSI escape codes on a VT100-compatible terminal.
For example, the following finds and highlights the strings bin, U or 1 which are whole words in the last 10 lines of /etc/passwd:
tail /etc/passwd | hilite "\b(bin|[U1])\b"
However, the script runs very slowly as each line forks an echo, egrep and sed.
In this case, it would be more efficient to do egrep on the entire input, and then run sed on its output.
How can I modify my function to do this? I would prefer to not create any temporary files if possible.
P.S. Is there another way to find and highlight lines in a similar way?
sed can do a bit of grepping itself: if you give it the -n flag (or #n instruction in a script) it won't echo any output unless asked. So
while read line
do
  echo $line | egrep "$1" | sed "s/$REGEX_SED/\x1b[7m&\x1b[0m/g"
done
could be simplified to
sed -n "s/$REGEX_SED/\x1b[7m&\x1b[0m/gp"
EDIT:
Here's the whole function:
hilite() {
  REGEX_SED=$(echo $1 | sed "s/[|()]/\\\&/g");
  sed -n "s/$REGEX_SED/\x1b[7m&\x1b[0m/gp"
}
That's all there is to it - no while loop, reading, grepping, etc.
If your egrep supports --color, just put this in .bash_profile:
hilite() { command egrep --color=auto "$@"; }
(Personally, I would name the function egrep; hence the usage of command).
I think you can replace the whole while loop with simply
sed -n "s/$REGEX_SED/\x1b[7m&\x1b[0m/gp"
because sed can read from stdin line-by-line so you don't need read
I'm not sure if running egrep and piping to sed is faster than using sed alone, but you can always compare using time.
Edit: added -n and p to sed to print only highlighted lines.
Well, you could simply do this:
egrep "$1" $line | sed "s/$REGEX_SED/\x1b[7m&\x1b[0m/g"
But I'm not sure that it'll be that much faster ; )
Just for the record, this is a method using a temporary file:
hilite() {
  export REGEX_SED=$(echo $1 | sed "s/[|()]/\\\&/g")
  export FILE=$2
  if [ -z "$FILE" ]
  then
    export FILE=~/tmp
    echo -n > $FILE
    while read line
    do
      echo $line >> $FILE
    done
  fi
  egrep "$1" $FILE | sed "s/$REGEX_SED/\x1b[7m&\x1b[0m/g"
  return $?
}
which also takes a file/pathname as the second argument, but still handles piped input, as in:
cat /etc/passwd | hilite "\b(bin|[U1])\b"