Alias for echo function with espeak executed in background - bash

I want to replace the normal echo function in Ubuntu bash with a function that additionally uses espeak to say something every time echo is used.
I came up with an alias for my .bashrc
alias ghostTalk='espeak -v +whisper -s 80 -p 100 "$(myFun)"& /bin/echo $1'
(in my final version I would replace ghostTalk with echo)
But this gives as output:
~$ ghostTalk 123
[2] 5685
123
[1] Done espeak -v +whisper -s 80 -p 100 "$(myFun)"
How can I avoid this and have the normal echo output (e.g. only 123) while it's talking in the background?

Backgrounding notifications can be suppressed with a double-fork:
ghostTalk() {
( espeak -v +whisper -s 80 -p 100 "$(myFun)" & )
builtin echo "$@"
}
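As a quick sanity check of the double-fork pattern, here is a sketch using sleep as a stand-in for the espeak call (so it runs anywhere); the extra subshell is what hides the job-control messages:

```shell
#!/bin/bash
ghostTalk() {
  # the surrounding ( ... ) is the extra fork that suppresses
  # the "[1] 5685" and "[1] Done ..." job-control messages
  ( sleep 0.1 & )
  builtin echo "$@"
}

ghostTalk hello world   # prints: hello world
```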

Related

Looping over IP addresses from a file using bash array

I have a file in which I have given all the IP addresses. The file looks like following:
[asad.javed@tarts16 ~]# cat file.txt
10.171.0.201
10.171.0.202
10.171.0.203
10.171.0.204
10.171.0.205
10.171.0.206
10.171.0.207
10.171.0.208
I have been trying to loop over the IP addresses by doing the following:
launch_sipp () {
readarray -t sipps < file.txt
for i in "${!sipps[@]}"; do
ip1=(${sipps[i]})
echo $ip1
sip=(${i[@]})
echo $sip
done
}
But when I try to access the array I get only the last IP address which is 10.171.0.208. This is how I am trying to access in the same function launch_sipp():
local sipp=$1
echo $sipp
Ip=(${ip1[*]})
echo $Ip
Currently I have IP addresses in the same script and I have other functions that are using those IPs:
launch_tarts () {
local tart=$1
local ip=${ip[tart]}
echo " ---- Launching Tart $1 ---- "
sshpass -p "tart123" ssh -Y -X -L 5900:$ip:5901 tarts@$ip <<EOF1
export DISPLAY=:1
gnome-terminal -e "bash -c \"pwd; cd /home/tarts; pwd; ./launch_tarts.sh exec bash\""
exit
EOF1
}
kill_tarts () {
local tart=$1
local ip=${ip[tart]}
echo " ---- Killing Tart $1 ---- "
sshpass -p "tart123" ssh -tt -o StrictHostKeyChecking=no tarts@$ip <<EOF1
. ./tartsenvironfile.8.1.1.0
nohup yes | kill_tarts mcgdrv &
nohup yes | kill_tarts server &
pkill -f traf
pkill -f terminal-server
exit
EOF1
}
ip[1]=10.171.0.10
ip[2]=10.171.0.11
ip[3]=10.171.0.12
ip[4]=10.171.0.13
ip[5]=10.171.0.14
case $1 in
kill) function=kill_tarts;;
launch) function=launch_tarts;;
*) exit 1;;
esac
shift
for ((tart=1; tart<=$1; tart++)); do
($function $tart) &
ips=(${ip[tart]})
tarts+=(${tart[@]})
done
wait
How can I use a different list of IPs, read from a file, for functions created for different purposes?
How about using GNU parallel? It's an incredibly powerful, very popular, free Linux tool that is well worth knowing, and it's easy to install.
Firstly, here's a basic parallel usage example:
$ parallel echo {} :::: list_of_ips.txt
# The four colons function as file input syntax.†
10.171.0.202
10.171.0.201
10.171.0.203
10.171.0.204
10.171.0.205
10.171.0.206
10.171.0.207
10.171.0.208
†(Specific to parallel; see a parallel usage cheat sheet for more.)
But you can replace echo with just about any series of commands you can imagine, however complex, including calls to other scripts. parallel loops through the input it receives and performs (in parallel) the same operation on each input.
More specific to your question, you could replace echo simply with a command call to your script
Your script would then no longer need to handle any looping through IPs itself; instead it can be designed for just a single IP input. parallel will handle running the program in parallel (you can set the number of concurrent jobs with the option -j n for any integer n).*
*By default parallel sets the number of jobs to the number of vCPUs it automatically determines your machine has available.
$ parallel process_ip.sh :::: list_of_ips.txt
In pure Bash:
#!/bin/bash
while read ip; do
echo "$ip"
# ...
done < file.txt
Or in parallel:
#!/bin/bash
while read ip; do
(
sleep "0.$RANDOM" # random execution time
echo "$ip"
# ...
) &
done < file.txt
wait
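Coming back to the original readarray approach: the usual fix is to keep the whole array intact and index it explicitly, rather than reassigning scalars inside the loop. A minimal sketch (the input file path here is just for illustration):

```shell
#!/bin/bash
# demo input, standing in for your real file.txt
printf '%s\n' 10.171.0.201 10.171.0.202 10.171.0.203 > /tmp/ips.txt

# read every line into the ips array, then iterate by index;
# the same index can later select the IP for other functions
readarray -t ips < /tmp/ips.txt
for i in "${!ips[@]}"; do
  echo "ip[$i] = ${ips[i]}"
done
```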

Why ksh disables stderr when subshell is executed?

The program:
function import {
set -x
# read NAME < <(/usr/bin/pwd)
NAME=$(/usr/bin/pwd)
echo 123 >&2
set +x
}
echo aaaaaaaaaaa
import
echo bbbbbbbbbbb
OUT=$( import 2>&1 )
echo "$OUT"
echo ccccccccccc
I hoped to have the output between 'aaa' and 'bbb' to be the same as in between 'bbb' and 'ccc'. But it is not the case with ksh:
aaaaaaaaaaa
+ /usr/bin/pwd
+ NAME=/home/neuron
+ echo 123
+ 1>& 2
123
bbbbbbbbbbb
+ /usr/bin/pwd
ccccccccccc
If I change $( ... ) into < <(...), stderr works as usual and I have the same output. I tried that on Solaris and Linux and it behaves the same, so I guess it's not a ksh bug. Please note that it's not just 'set -x' being disabled; the 'echo 123 1>&2' output also disappears. In bash the code works as I would expect.
My questions are 'why' and 'how to capture the function's stdout and stderr?'
Thank you
Vlad
It's confirmed to be a bug. And we can reproduce it very simply:
Something appears broken with nested command substitution:
for command in true /bin/true; do
a=$( ( b=$( $command ); echo 123 >& 3; ) 3>& 1 ) &&
echo a=$a command=$command
done
a=123 command=true
a= command=/bin/true
We run the same assignment twice, once with a builtin, and once with
an external command. We would expect the same results, but this fails
when we include the external command.
Glenn Fowler:
I believe this was fixed between 2012-08-23 and 2012-10-12
Tested with the latest beta:
$ ${ksh} test.sh
Version AIJM 93v- 2014-01-14
a=123 command=true
a=123 command=/bin/true
$
$ ksh test.sh
Version JM 93u 2011-02-08
a=123 command=true
a= command=/bin/true
$
This looks like a bug in ksh (my version is 93u), and it is not specific to pwd or stderr.
Below I can reproduce the effect with true and fd 3. (This allows us to keep using the shell trace.)
The effect appears to be triggered by assigning output from an external process
into a variable inside a function. If the assignment is then followed by output
to some other file descriptor, that output gets lost.
The idea here is that the $( ... ) construct somehow conflicts with the subsequent redirect,
but only when the code in ... is run externally. The true and pwd builtins don't trigger a subshell in ksh93.
I will ask David Korn for confirmation.
function f1 {
var=$( /bin/true )
echo 123 >& 3
}
function f2 {
var=$( true )
echo 123 >& 3
}
function f3 {
typeset var
var=$( /bin/true )
echo 123 >& 3
}
functions="f1 f2 f3"
typeset -tf ${functions}
exec 3>& 1
echo ${.sh.version}
for f in ${functions}; do
echo TEST $f
functions $f
echo "123 expected: "
$f
OUT=$( $f 3>& 1 )
echo "OUT='123' expected"
echo "OUT='$OUT'" Captured output
echo
done
output:
Version JM 93u 2011-02-08
TEST f1
function f1 {
var=$( /bin/true )
echo 123 >& 3
}
123 expected:
123
OUT='123' expected
OUT='' Captured output
TEST f2
function f2 {
var=$( true )
echo 123 >& 3
}
123 expected:
123
OUT='123' expected
OUT='123' Captured output
TEST f3
function f3 {
typeset var
var=$( /bin/true )
echo 123 >& 3
}
123 expected:
123
OUT='123' expected
OUT='' Captured output
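For affected ksh93 versions, the process-substitution form mentioned in the question sidesteps the lost output: read the external command's result via < <(...) instead of var=$( ... ). A minimal sketch (written in bash syntax here for illustration; ksh93 accepts the same construct):

```shell
#!/bin/bash
f() {
  # read the command's output via process substitution instead of
  # var=$(/bin/pwd), which is the form that triggered the bug in ksh93u
  read -r var < <(/bin/pwd)
  echo "pwd was: $var" >&2
}

# capturing both stdout and stderr now works as expected
OUT=$( f 2>&1 )
echo "$OUT"
```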

OS X Bash, 'watch' command

I'm looking for the best way to duplicate the Linux 'watch' command on Mac OS X. I'd like to run a command every few seconds to pattern match on the contents of an output file using 'tail' and 'sed'.
What's my best option on a Mac, and can it be done without downloading software?
With Homebrew installed:
brew install watch
You can emulate the basic functionality with the shell loop:
while :; do clear; your_command; sleep 2; done
That will loop forever, clear the screen, run your command, and wait two seconds - the basic watch your_command implementation.
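If you want the loop to stop on its own (handy for scripting or testing), a bounded variant of the same idea can be sketched as follows (poll is a hypothetical name, not a standard command):

```shell
#!/bin/bash
# run a command every <interval> seconds, <count> times, then stop
poll() {
  local count=$1 interval=$2
  shift 2
  local i
  for (( i = 0; i < count; i++ )); do
    "$@"
    sleep "$interval"
  done
}

poll 3 0.5 date +%s   # prints three timestamps, then returns
```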
You can take this a step further and create a watch.sh script that can accept your_command and sleep_duration as parameters:
#!/bin/bash
# usage: watch.sh <your_command> <sleep_duration>
while :;
do
clear
date
$1
sleep $2
done
Use MacPorts:
$ sudo port install watch
The shell loops above will do the trick, and you could even convert them to an alias (you may need to wrap them in a function to handle parameters):
alias myWatch='_() { while :; do clear; $2; sleep $1; done }; _'
Examples:
myWatch 1 ls ## Self-explanatory
myWatch 5 "ls -lF $HOME" ## Every 5 seconds, list out home directory; double-quotes around command to keep its arguments together
Alternatively, Homebrew can install watch from http://procps.sourceforge.net/:
brew install watch
It may be that "watch" is not what you want. You probably want to ask for help in solving your problem, not in implementing your solution! :)
If your real goal is to trigger actions based on what's seen from the tail command, then you can do that as part of the tail itself. Instead of running "periodically", which is what watch does, you can run your code on demand.
#!/bin/sh
tail -F /var/log/somelogfile | while read line; do
if echo "$line" | grep -q '[Ss]ome.regex'; then
# do your stuff
fi
done
Note that tail -F will continue to follow a log file even if it gets rotated by newsyslog or logrotate. You want to use this instead of the plain tail -f. Check man tail for details.
That said, if you really do want to run a command periodically, the other answers provided can be turned into a short shell script:
#!/bin/sh
if [ -z "$2" ]; then
echo "Usage: $0 SECONDS COMMAND" >&2
exit 1
fi
DELAY=$1  # avoid the name SECONDS, which bash treats as a special auto-incrementing variable
shift 1
while sleep $DELAY; do
clear
$*
done
I am going with the answer from here:
bash -c 'while [ 0 ]; do <your command>; sleep 5; done'
But you're really better off installing watch as this isn't very clean...
If watch doesn't want to install via
brew install watch
There is another similar/copy version that installed and worked perfectly for me
brew install visionmedia-watch
https://github.com/tj/watch
Or, in your ~/.bashrc file:
function watch {
while :; do clear; date; echo; $@; sleep 2; done
}
To prevent flickering when your main command takes a perceptible time to complete, you can capture the output and only clear the screen once it's done.
function watch { while :; do a=$($@); clear; printf '%s\n\n%s\n' "$(date)" "$a"; sleep 1; done; }
Then use it by:
watch istats
Try this:
#!/bin/bash
# usage: watch [-n integer] COMMAND
case $# in
0)
echo "Usage $0 [-n int] COMMAND"
exit 1
;;
*)
sleep=2;
;;
esac
if [ "$1" == "-n" ]; then
sleep=$2
shift; shift
fi
while :;
do
clear;
echo "$(date) every ${sleep}s $@"; echo
$@;
sleep $sleep;
done
Here's a slightly changed version of this answer that:
checks for valid args
shows a date and duration title at the top
moves the "duration" argument to be the 1st argument, so complex commands can be easily passed as the remaining arguments.
To use it:
Save this to ~/bin/watch
execute chmod 700 ~/bin/watch in a terminal to make it executable.
try it by running watch 1 echo "hi there"
~/bin/watch
#!/bin/bash
function show_help()
{
echo ""
echo "usage: watch [sleep duration in seconds] [command]"
echo ""
echo "e.g. To cat a file every second, run the following"
echo ""
echo " watch 1 cat /tmp/it.txt"
exit;
}
function show_help_if_required()
{
if [ "$1" == "help" ]
then
show_help
fi
if [ -z "$1" ]
then
show_help
fi
}
function require_numeric_value()
{
REG_EX='^[0-9]+$'
if ! [[ $1 =~ $REG_EX ]] ; then
show_help
fi
}
show_help_if_required $1
require_numeric_value $1
DURATION=$1
shift
while :; do
clear
echo "Updating every $DURATION seconds. Last updated $(date)"
bash -c "$*"
sleep $DURATION
done
Use the Nix package manager!
Install Nix, then run nix-env -iA nixpkgs.watch; watch should be available for use after completing the install instructions (including sourcing . "$HOME/.nix-profile/etc/profile.d/nix.sh" in your shell).
The watch command that's available on Linux does not exist on macOS. If you don't want to use brew you can add this bash function to your shell profile.
# execute commands at a specified interval of seconds
function watch.command {
# USAGE: watch.commands [seconds] [commands...]
# EXAMPLE: watch.command 5 date
# EXAMPLE: watch.command 5 date echo 'ls -l' echo 'ps | grep "kubectl\\\|node\\\|npm\\\|puma"'
# EXAMPLE: watch.command 5 'date; echo; ls -l; echo; ps | grep "kubectl\\\|node\\\|npm\\\|puma"' echo date 'echo; ls -1'
local cmds=()
for arg in "${@:2}"; do
# read from process substitution (not a pipeline) so cmds+= persists in this shell
while read -r cmd; do
cmds+=("$cmd")
done < <(echo "$arg" | sed 's/; /;/g' | tr \; \\n)
done
while true; do
clear
for cmd in "${cmds[@]}"; do
eval $cmd
done
sleep $1
done
}
https://gist.github.com/Gerst20051/99c1cf570a2d0d59f09339a806732fd3

Here document as an argument to bash function

Is it possible to pass a here document as a bash function argument, and in the function have the parameter preserved as a multi-lined variable?
Something along the following lines:
function printArgs {
echo arg1="$1"
echo -n arg2=
cat <<EOF
$2
EOF
}
printArgs 17 <<EOF
18
19
EOF
or maybe:
printArgs 17 $(cat <<EOF
18
19
EOF)
I have a here document that I want to feed to ssh as the commands to execute, and the ssh session is called from a bash function.
The way to that would be possible is:
printArgs 17 "$(cat <<EOF
18
19
EOF
)"
But why would you want to use a heredoc for this? heredoc is treated as a file in the arguments so you have to (ab)use cat to get the contents of the file, why not just do something like:
printArgs 17 "18
19"
Please keep in mind that it is better to make a script on the machine you want to ssh to and run that, rather than trying a hack like this, because bash will still expand variables and such in your multiline argument.
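A runnable sketch of that command-substitution form, showing that the newlines survive into $2 (the function and values are just for illustration):

```shell
#!/bin/bash
printArgs() {
  echo "arg1=$1"
  echo "arg2=$2"
}

# capture the here document's body into a variable,
# then pass it to the function as a single argument
msg="$(cat <<EOF
18
19
EOF
)"
printArgs 17 "$msg"
```

This prints arg1=17, then arg2=18 with 19 on the following line, confirming $2 kept its embedded newline.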
If you're not using something that will absorb standard input, then you will have to supply something that does it:
$ foo () { while read -r line; do var+=$line; done; }
$ foo <<EOF
a
b
c
EOF
Building on Ned's answer, my solution allows the function to take its input as an argument list or as a heredoc.
printArgs() (
[[ $# -gt 0 ]] && exec <<< $*
ssh -T remotehost
)
So you can do this
printArgs uname
or this
printArgs << EOF
uname
uptime
EOF
So you can use the first form for single commands and the long form for multiple commands.
xargs should do exactly what you want. It converts standard input into arguments for a command (notice -0 allows newlines to be preserved):
$ xargs -0 <<EOF printArgs 17
18
19
EOF
But for you special case, I suggest you to send command on standard input of ssh:
$ ssh host <<EOF
ls
EOF
One way to feed commands to ssh through a here doc and a function is as so:
#!/bin/sh
# define the function
printArgs() {
echo "$1"
ssh -T remotehost
}
# call it with a here document supplying its standard input
printArgs 17 <<EOF
uname
uptime
EOF
The results:
17
Linux remotehost 2.6.32-5-686 ...
Last login: ...
No mail.
Linux
16:46:50 up 4 days, 17:31, 0 users, load average: 0.06, 0.04, 0.01

How to trick an application into thinking its stdout is a terminal, not a pipe

I'm trying to do the opposite of "Detect if stdin is a terminal or pipe?".
I'm running an application that's changing its output format because it detects a pipe on STDOUT, and I want it to think that it's an interactive terminal so that I get the same output when redirecting.
I was thinking that wrapping it in an expect script or using a proc_open() in PHP would do it, but it doesn't.
Any ideas out there?
Aha!
The script command does what we want...
script --return --quiet -c "[executable string]" /dev/null
Does the trick!
Usage:
script [options] [file]
Make a typescript of a terminal session.
Options:
-a, --append append the output
-c, --command <command> run command rather than interactive shell
-e, --return return exit code of the child process
-f, --flush run flush after each write
--force use output file even when it is a link
-q, --quiet be quiet
-t[<file>], --timing[=<file>] output timing data to stderr or to FILE
-h, --help display this help
-V, --version display version
Based on Chris' solution, I came up with the following little helper function:
faketty() {
script -qfc "$(printf "%q " "$@")" /dev/null
}
The quirky-looking printf is necessary to correctly expand the script's arguments in $@ while protecting possibly quoted parts of the command (see example below).
Usage:
faketty <command> <args>
Example:
$ python -c "import sys; print(sys.stdout.isatty())"
True
$ python -c "import sys; print(sys.stdout.isatty())" | cat
False
$ faketty python -c "import sys; print(sys.stdout.isatty())" | cat
True
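The %q trick can be seen in isolation: printf %q escapes each argument so the resulting string can be handed to another shell and re-parsed back into the original words:

```shell
#!/bin/bash
# build a shell-safe command string from separate arguments;
# spaces are escaped and $HOME stays a literal, unexpanded string
quoted=$(printf "%q " echo "two words" '$HOME')
echo "$quoted"     # echo two\ words \$HOME

# re-parsing the string reproduces the original arguments exactly
bash -c "$quoted"  # two words $HOME
```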
The unbuffer script that comes with Expect should handle this ok. If not, the application may be looking at something other than what its output is connected to, eg. what the TERM environment variable is set to.
Referring to the previous answer: on Mac OS X, script can be used as below...
script -q /dev/null commands...
But, because it may replace "\n" with "\r\n" on the stdout, you may also need script like this:
script -q /dev/null commands... | perl -pe 's/\r\n/\n/g'
If there are pipes between these commands, you need to flush stdout. For example:
script -q /dev/null commands... | ruby -ne 'print "....\n";STDOUT.flush' | perl -pe 's/\r\n/\n/g'
I don't know if it's doable from PHP, but if you really need the child process to see a TTY, you can create a PTY.
In C:
#include <stdio.h>
#include <stdlib.h>
#include <sysexits.h>
#include <unistd.h>
#include <pty.h>
int main(int argc, char **argv) {
int master;
struct winsize win = {
.ws_col = 80, .ws_row = 24,
.ws_xpixel = 480, .ws_ypixel = 192,
};
pid_t child;
if (argc < 2) {
printf("Usage: %s cmd [args...]\n", argv[0]);
exit(EX_USAGE);
}
child = forkpty(&master, NULL, NULL, &win);
if (child == -1) {
perror("forkpty failed");
exit(EX_OSERR);
}
if (child == 0) {
execvp(argv[1], argv + 1);
perror("exec failed");
exit(EX_OSERR);
}
/* now the child is attached to a real pseudo-TTY instead of a pipe,
* while the parent can use "master" much like a normal pipe */
}
I was actually under the impression that expect itself creates a PTY, though.
Updating @A-Ron's answer to
a) work on both Linux & MacOs
b) propagate status code indirectly (since MacOs script does not support it)
faketty () {
# Create a temporary file for storing the status code
tmp=$(mktemp)
# Ensure it worked or fail with status 99
[ "$tmp" ] || return 99
# Produce a script that runs the command provided to faketty as
# arguments and stores the status code in the temporary file
cmd="$(printf '%q ' "$@")"'; echo $? > '$tmp
# Run the script through /bin/sh with fake tty
if [ "$(uname)" = "Darwin" ]; then
# MacOS
script -Fq /dev/null /bin/sh -c "$cmd"
else
script -qfc "/bin/sh -c $(printf "%q " "$cmd")" /dev/null
fi
# Ensure that the status code was written to the temporary file or
# fail with status 99
[ -s $tmp ] || return 99
# Collect the status code from the temporary file
err=$(cat $tmp)
# Remove the temporary file
rm -f $tmp
# Return the status code
return $err
}
Examples:
$ faketty false ; echo $?
1
$ faketty echo '$HOME' ; echo $?
$HOME
0
embedded_example () {
faketty perl -e 'sleep(5); print "Hello world\n"; exit(3);' > LOGFILE 2>&1 </dev/null &
pid=$!
# do something else
echo 0..
sleep 2
echo 2..
echo wait
wait $pid
status=$?
cat LOGFILE
echo Exit status: $status
}
$ embedded_example
0..
2..
wait
Hello world
Exit status: 3
Too new to comment on the specific answer, but I thought I'd followup on the faketty function posted by ingomueller-net above since it recently helped me out.
I found that this was creating a typescript file that I didn't want/need so I added /dev/null as the script target file:
function faketty { script -qfc "$(printf "%q " "$@")" /dev/null ; }
There's also a pty program included in the sample code of the book "Advanced Programming in the UNIX Environment, Second Edition"!
Here's how to compile pty on Mac OS X:
man 4 pty # pty -- pseudo terminal driver
open http://en.wikipedia.org/wiki/Pseudo_terminal
# Advanced Programming in the UNIX Environment, Second Edition
open http://www.apuebook.com
cd ~/Desktop
curl -L -O http://www.apuebook.com/src.tar.gz
tar -xzf src.tar.gz
cd apue.2e
wkdir="${HOME}/Desktop/apue.2e"
sed -E -i "" "s|^WKDIR=.*|WKDIR=${wkdir}|" ~/Desktop/apue.2e/Make.defines.macos
echo '#undef _POSIX_C_SOURCE' >> ~/Desktop/apue.2e/include/apue.h
str='#include <sys/select.h>'
printf '%s\n' H 1i "$str" . wq | ed -s calld/loop.c
str='
#undef _POSIX_C_SOURCE
#include <sys/types.h>
'
printf '%s\n' H 1i "$str" . wq | ed -s file/devrdev.c
str='
#include <sys/signal.h>
#include <sys/ioctl.h>
'
printf '%s\n' H 1i "$str" . wq | ed -s termios/winch.c
make
~/Desktop/apue.2e/pty/pty ls -ld *
I was trying to get colors when running shellcheck <file> | less on Linux, so I tried the above answers, but they produce this bizarre effect where text is horizontally offset from where it should be:
In ./all/update.sh line 6:
for repo in $(cat repos); do
^-- SC2013: To read lines rather than words, pipe/redirect to a 'while read' loop.
(For those unfamiliar with shellcheck, the line with the warning is supposed to line up with the where the problem is.)
In order for the answers above to work with shellcheck, I tried one of the options from the comments:
faketty() {
0</dev/null script -qfc "$(printf "%q " "$@")" /dev/null
}
This works. I also added --return and used long options, to make this command a little less inscrutable:
faketty() {
0</dev/null script --quiet --flush --return --command "$(printf "%q " "$@")" /dev/null
}
Works in Bash and Zsh.
