Process substitution equivalent in ksh (to modify stdout) - bash

I have this bash script that prepends a timestamp to each entry of a log file, but I am stuck converting it to Korn shell syntax. I read that ksh doesn't support this kind of subshell/process substitution. I tried to use a function instead, but that didn't work. In particular, I tried to convert the entire exec line into ksh syntax. Could someone please take a look and help me?
#!/usr/bin/bash
exec > >(
    while read line; do
        echo "$(date '+%Y%m%d %H:%M:%S') ${line}"
    done > n.log
) 2>&1
echo 'first line; should have an initial timestamp'
sleep 2
echo 'printed two seconds later, should have a timestamp with a comparable offset'

The portable approach -- which will work in any POSIX-compliant shell -- is to use a named FIFO.
Here, a ksh extension (printf '%(...)T'), also available in recent bash, is used to avoid needing to launch date inside the subshell.
mkfifo log.fifo
(while IFS= read -r line; do
    printf '%(%Y%m%d %H:%M:%S)T '
    printf '%s\n' "$line"
done >n.log <log.fifo) &
exec >log.fifo
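If the fifo shouldn't be left lying around after the script exits, a cleanup trap can be registered right after creating it -- a minimal sketch, reusing the log.fifo name from above:
trap 'rm -f log.fifo' EXIT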

A little expansion on Charles' excellent answer: make the structure more explicit by creating separate functions for the output modification and for the main functional code; without named pipes, there's nothing to clean up after you're done.
print_with_timestamps() {
    (while IFS= read -r line; do
        printf '%(%Y%m%d %H:%M:%S)T '
        printf '%s\n' "$line"
    done) > n.log 2>&1
}
main() {
    echo 'first line; should have an initial timestamp'
    sleep 2
    echo 'printed two seconds later, should have a timestamp with a comparable offset'
}
main | print_with_timestamps
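After a run, n.log contains each line of main's output prefixed with a timestamp, along these lines (dates purely illustrative):
20240101 12:00:00 first line; should have an initial timestamp
20240101 12:00:02 printed two seconds later, should have a timestamp with a comparable offset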

Related

Pass shell parameters as stdin to invoked program

I'm trying to run a program with each of my script's arguments passed as a different line on stdin. My current attempt involves using a for loop, as follows:
#!/bin/bash
[My Program] <<EOF
for i in "$#"
do
$i
done
EOF
This doesn't work -- the program actually receives for i in as part of its input, instead of being given the list of arguments themselves. How would I change it to function?
To feed your program's stdin a newline-separated list of the command-line arguments with which your script was called:
#!/usr/bin/env bash
./your-program <<<"$(printf '%s\n' "$@")"
...or, with POSIX sh-compatible heredoc syntax:
#!/bin/sh
./your-program <<EOF
$(printf '%s\n' "$#")
EOF
If for some reason you really want to use a for loop, you can do that:
#!/bin/sh
./your-program <<EOF
$(for i; do
    echo "$i"
done)
EOF
...though note that printf would be a preferable replacement for echo even here; to understand why, see the APPLICATION USAGE section of the POSIX spec for echo.
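A quick illustration (the output of the second command differs between shells, which is precisely the problem):
$ printf '%s\n' 'a\tb'     # always prints a\tb literally
$ echo 'a\tb'              # bash prints a\tb; dash expands the \t into a tab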

Shell POSIX two nested while read and read from stdin not working

I have that sample script:
#!/bin/sh
while read ll </dev/fd/4; do
    echo "1 "$ll
    while read line; do
        echo $line
        read input </dev/fd/3
        echo "$input"
    done 3<&0 <notify-finished
done 4<output_file
Currently, the first loop does not iterate; it just stays on line 1. How do I fix that without bashisms? It has to be highly portable. Thanks.
Your code already has bashisms. Here, I'm taking them out (and simplifying the FD handling for better readability):
#!/bin/sh
while read ll <&4; do            # read from output_file
    printf '%s\n' "1 $ll"
    while read line <&3; do      # read from notify-finished
        printf '%s\n' "$line"
        read input               # read from stdin
        printf '%s\n' "$input"
    done 3<notify-finished
done 4<output_file
Run the script as follows:
echo "output_file" >output_file
echo "notify-finished" >notify-finished
echo "stdout" | ./yourscript
...and it correctly exits with the following output:
1 output_file
notify-finished
stdout
Notes:
echo's behavior is wildly nonportable across POSIX platforms. See the APPLICATION USAGE section of the POSIX spec for echo, which advises using printf instead.
/dev/fd/## is not specified by POSIX; it is an extension made available both by Linux distributions (creating a symlink to /proc/self/fd -- /proc being itself an unspecified extension) and by bash itself. Use <&4 in place of </dev/fd/4.
You probably want to use the -r argument to read -- which is POSIX-specified, and prevents the default behavior of treating backslashes as escape sequences for newlines and characters in IFS. Without it, foo\bar is read as foobar, thus not reading your data as it truly exists in its input sources.
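A quick demonstration of what -r changes (both commands feed the same literal foo\bar to read):
$ printf 'foo\\bar\n' | { read x; printf '%s\n' "$x"; }
foobar
$ printf 'foo\\bar\n' | { read -r x; printf '%s\n' "$x"; }
foo\bar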

How to redirect stdin to file in bash

Consider this very simple bash script:
#!/bin/bash
cat > /tmp/file
It redirects whatever you pipe into it to a file. e.g.
echo "hello" | script.sh
and "hello" will be in the file /tmp/file. This works... but it seems like there should be a native bash way of doing this without using "cat". But I can't figure it out.
NOTE:
It must be in a script. I want the script to operate on the file contents afterwards.
It must be in a file; the steps afterward in my case involve a tool that only reads from a file.
I already have a pretty good way of doing this - it's just that it seems like a hack. Is there a native way? Like "/tmp/file < 0" or "0> /tmp/file". I thought bash would have a native syntax to do this...
You could simply do
cp /dev/stdin myfile.txt
Terminate your input with Ctrl+D and, voilà! You have your file created with text from stdin.
echo "$(</dev/stdin)" > /tmp/file
terminate your input with Enter followed by Ctrl+D
I don't think there is a builtin that reads from stdin until EOF, but you can do this:
#!/bin/bash
exec > /tmp/file
while IFS= read -r line; do
    printf '%s\n' "$line"
done
Another way of doing it using pure BASH:
#!/bin/bash
IFS= read -t 0.01 -r -d '' indata
[[ -n $indata ]] && printf "%s" "$indata" >/tmp/file
IFS= and -d '' cause all of stdin to be read into the variable indata.
The reason for using -t 0.01: when the script is called with no input pipe, read times out after a negligible 0.01-second delay. If any data is available on input, it is read into indata and redirected to /tmp/file.
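For example, assuming the snippet above is saved as script.sh: echo hello | ./script.sh leaves hello in /tmp/file, while running ./script.sh with nothing on stdin returns after the timeout without writing the file.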
Another option: dd of=/tmp/myfile.txt
Note: This is not a built-in, however, it might help other people looking for a simple solution.
Why don't you just:
GENERATE INPUT | (
    # do whatever you like to the input here
)
But sometimes, especially when you want to complete the input first, then operate on the modified output, you should still use temporary files:
TMPFILE="/tmp/fileA-$$"
GENERATE INPUT | (
    # modify input
) > "$TMPFILE"
(
    # do something with the input from TMPFILE
) < "$TMPFILE"
rm "$TMPFILE"
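Where mktemp is available (essentially all modern systems, though POSIX does not mandate it), a safer variant of the same pattern avoids predictable names in /tmp and cleans up even on early exit -- a sketch:
TMPFILE="$(mktemp)" || exit 1
trap 'rm -f "$TMPFILE"' EXIT
GENERATE INPUT | (
    # modify input
) > "$TMPFILE"
(
    # do something with the input from TMPFILE
) < "$TMPFILE"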
If you don't want the program to end after reaching EOF, this might be helpful.
#!/bin/bash
exec < <(tail -F /tmp/a)
cat -

Processing arguments from file/cli/stdin in bash

I can see myself ending up writing a lot of scripts which do something based on some arguments on the command line.
This then progresses to doing more or less the same thing multiple times, automated with a scheduler.
To prevent myself having to create a new job for each variation on the arguments, I would like to create a simple script skeleton which I can use to quickly create scripts which take the same arguments from:
The command line
A file from a path specified on the command line
From stdin until eof
My initial approach for taking arguments or config from a TAB-delimited file was as follows:
if [ -f "$1" ]; then
    echo "Using config file '$1'"
    IFS=$'\t'
    cat "$1" | grep -v "^#" | while read line; do
        if [ "$line" != "" ]; then
            echo $line
            #call fn with line as args
        fi
    done
    unset IFS
elif [ -d "$1" ]; then
    echo "Using cli arguments..."
    #call fn with $1 $2 $3 etc...
else
    echo "Read from stdin, ^d will terminate"
    IFS=$'\t'
    while read line; do
        if [ "$(echo $line | grep -v "^#")" != "" ]; then
            #call fn with line as args
        fi
    done
    unset IFS
fi
So to all those who have doubtless done this kind of thing before:
How did/would you go about it?
Am I being too procedural - could this be better done with awk or similar?
Is this the best approach anyway?
Not sure whether I'm a bit wide of the mark, but it sounds like you are trying to reinvent xargs.
If you have a script, normally invoked as such
$ your_script.sh -d foo bar baz
You can get the parameters from stdin as follows:
$ xargs your_script.sh
-d foo
bar
baz
^D
Or from a file
$ cat config_file | xargs your_script.sh
(assuming that config_file has the following content)
-d foo bar
baz
Or from multiple config files
$ cat config_file1 config_file2 | xargs your_script.sh
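One caveat: xargs does its own word splitting and quote processing, so arguments containing spaces or quote characters will be mangled. With GNU xargs, xargs -d '\n' your_script.sh treats each input line as exactly one argument.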
Can you think of a standard Unix utility that behaves as you describe? (No, I can't.) That suggests that you are slightly off-target with your goal.
The testing of -f "$1" and -d "$1" is not conventional, but if your script only works on directories, maybe it makes sense.
Ultimately, I think you need an interface like:
your_cmd [-f argumentlist] [file ...]
The explicit but optional -f argumentlist allows you to specify the file to read from on the command line. Otherwise, the files specified on the command line are processed, unless there are no such arguments, in which case the file names to be processed are read from standard input. This is a lot closer to a conventional organization. We can debate about the handling of file names with spaces and newlines in the names some other time.
The core of your code will be written to accept/process one file name at a time. This might be written as a shell function, which allows the maximum reuse.
while getopts f: opt
do
    case $opt in
    (f) while read file; do shell_function "$file"; done < "$OPTARG"; exit 0;;
    (*) : Error handling etc;;
    esac
done
shift $(($OPTIND - 1))
case $# in
(0) while read file; do shell_function "$file"; done; exit 0;;
(*) for file in "$@"; do shell_function "$file"; done; exit 0;;
esac
It is not very hard to ring the variations on this. It is also tolerably compact.
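Typical invocations of this skeleton would then look like (your_cmd standing in for the script, as above):
$ your_cmd -f argumentlist     # file names read from argumentlist
$ your_cmd file1 file2         # file names taken from the command line
$ ls | your_cmd                # file names read from standard input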

Modifying data using awk

In a long file I'm searching for something like this:
c 0.5p_f
10 px 2
I need to modify the 3rd column of the line after the 'c 0.5p_f' marker.
It's part of a bash script that would do this, and I would like to avoid using awk scripts, sticking to bash commands only.
Why not use awk? It's perfect.
do_modify{$3="modify";do_modify=0}/c 0\.5p_f/{do_modify=1}1
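Invoked as, e.g. (input.txt standing in for your long file):
awk 'do_modify{$3="modify";do_modify=0}/c 0\.5p_f/{do_modify=1}1' input.txt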
If you can use sed scripts,
/c 0\.5p_f/{n;s/\([^[:space:]]*[[:space:]]\+[^[:space:]]*[[:space:]]\+\)[^[:space:]]*/\1modify/}
would do. Not that pure Bash is hard either, though.
do_modify=
while read -r line; do
    if [[ -n ${do_modify} ]]; then
        columns=(${line})
        columns[2]=modified
        line=${columns[*]}
        do_modify=
    fi
    printf '%s\n' "${line}"
    if [[ ${line} = *'c 0.5p_f'* ]]; then
        do_modify=1
    fi
done
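Assuming the loop is saved as modify.sh, apply it with plain redirections: ./modify.sh <input.txt >output.txt.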
