Alternatives to bash, sh, or zsh

I want an advanced shell or command line for Unix with the following features:
output to stderr and stdout in different colours.
the ability to highlight (or find) keywords in the output of the currently executing command.
an indicator in the OS task bar/title while a command is running and when it has completed.
I am looking for an advanced shell that boosts productivity. Are there any alternatives?

Re: output to stderr and stdout in different colours ... this can be done in Bash.
# colourize stderr in the current shell
# note: run sed in line-buffered mode (-l is BSD sed; GNU sed uses -u or --line-buffered)
(
    exec 2> >(sed -l -e $'s/.*/\033[31m&\033[m/')
    ls -ld / xxxxx
)
# colourize both stderr and stdout in the current shell
(
    exec 1> >(sed -l -e $'s/.*/\033[32m&\033[m/') 2> >(sed -l -e $'s/.*/\033[31m&\033[m/')
    ls -ld / xxxxx
)
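A sketch wrapping the same idea in a reusable helper (the function name colorize_stderr is made up here; with GNU sed, replace -l with -u or --line-buffered):
# run any command with its stderr shown in red
colorize_stderr() {
    "$@" 2> >(sed -l -e $'s/.*/\033[31m&\033[m/' >&2)
}
# usage:
colorize_stderr ls -ld / xxxxx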

That isn't a trivial proposition.
There are shell setups that work with the terminal to echo the currently executing command in the title bar, such as bash as configured on Mac OS X.
The commands themselves are independent programs and do not, in general, colour-code their output. So, to get colour-coded output, the shell has to capture the error output of each command it runs and arrange to display it with the appropriate colours.
Searching the output requires the terminal program to keep the output it displays in a searchable form, and some program (probably the terminal program or possibly the shell) will have to respond to searching operations.
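For the title-bar indicator, one common approach (a sketch, assuming an xterm-compatible terminal) is to emit the OSC 0 escape sequence from a DEBUG trap and from PROMPT_COMMAND in bash:
# show the command currently being run in the terminal title
trap 'printf "\033]0;running: %s\007" "$BASH_COMMAND"' DEBUG
# when the prompt comes back, mark the last command as finished
PROMPT_COMMAND='printf "\033]0;done (%s)\007" "${PWD##*/}"'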

Emacs allows you to run interactive shells such as bash or zsh.
rc works similarly when run on Plan 9 -- I'm not sure about its Unix ports.

Related

Can't run "compgen -c" from perl script

I want to check whether a command exists on my machine (RedHat) from inside a Perl script.
I'm trying to check whether the output of compgen -c contains the desired command, but running it from inside a script just gives me empty output. Other commands work fine.
example.pl:
my $x = `compgen -c`;
print $x;
# empty output
my $y = `ls -a`;
print $y;
# .
# ..
# example.pl
Are there possible solutions for this? Or is there a better way to check for commands on my machine?
First, Perl runs external commands using /bin/sh, which nowadays is a link to whatever shell serves as a default-of-sorts on your system. Much of the time that is bash, but not always; on RedHat it is.
This compgen is a bash builtin. One way to discover that is to run man compgen (in bash) -- the bash manual pops up. Another way is type, as Dave shows.
To use builtins we generally need to run an explicit shell for them, and their behaviour can vary depending on whether that shell is "interactive" or not.† I can't find a discussion of that in the bash documentation for this builtin, but experimentation reveals that you need
my @completions = qx(bash -c "compgen -c");
The quotes are needed so that the complete command is passed to the shell that is started.
Note that this way you don't capture any STDERR from those commands; it will come out on the terminal and can easily be missed there. Alternatively, you can redirect that stream within the command by adding 2>&1 (redirect to STDOUT) at the end of it.
This is one of the reasons to use one of a number of good libraries for running and managing external commands instead of the builtin "backticks" (the qx I use above is an operator form of it).
† This can be facilitated with -i:
my @output_lines = qx(bash -i -c "command with arguments");
It's because compgen is a bash built-in command, not an external command. And when you run a command using backticks, you get your system's default shell - which is probably going to be /bin/sh, not bash.
The solution is to explicitly run bash, using the -c command-line option to give it a command to run.
my $x = `bash -c 'compgen -c'`;
From a bash prompt, you can use type to see how a command is implemented.
$ type ssh
ssh is /usr/bin/ssh
$ type compgen
compgen is a shell builtin
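As an aside (a sketch, not taken from the answers above): if the goal is only to test whether a command exists, the POSIX command -v builtin works in plain /bin/sh, so no explicit bash is needed:
# exit status 0 if "ssh" resolves to an alias, function, builtin or executable
if command -v ssh >/dev/null 2>&1; then
    echo "ssh is available"
else
    echo "ssh is not installed"
fi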

How to hide output error messages from terminal? [duplicate]

I have a Bash script that runs a program with parameters. That program outputs some status (doing this, doing that...). There isn't any option for this program to be quiet. How can I prevent the script from displaying anything?
I am looking for something like Windows' "echo off".
The following sends standard output to the null device (bit bucket).
scriptname >/dev/null
And if you also want error messages to be sent there, use one of (the first may not work in all shells):
scriptname &>/dev/null
scriptname >/dev/null 2>&1
scriptname >/dev/null 2>/dev/null
And, if you want to record the messages, but not see them, replace /dev/null with an actual file, such as:
scriptname &>scriptname.out
For completeness, under Windows cmd.exe (where "nul" is the equivalent of "/dev/null"), it is:
scriptname >nul 2>nul
Something like
script > /dev/null 2>&1
This redirects both standard output and standard error to /dev/null, so neither is displayed.
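One detail worth spelling out: the order of the two redirections matters, because 2>&1 duplicates whatever stdout points at when the redirection is processed:
script > /dev/null 2>&1   # both streams discarded
script 2>&1 > /dev/null   # stderr still reaches the terminal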
An alternative that may fit in some situations is to assign the result of a command to a variable:
$ DUMMY=$( grep root /etc/passwd 2>&1 )
$ echo $?
0
$ DUMMY=$( grep r00t /etc/passwd 2>&1 )
$ echo $?
1
Since Bash and other POSIX shells do not treat a plain variable assignment as a command in its own right, the exit status in $? is that of the command run inside the command substitution.
Note: an assignment made with the typeset or declare keyword is considered a command, so in that case the exit status you see is that of the assignment (the declare) itself, not of the command executed in the sub-shell:
$ declare DUMMY=$( grep r00t /etc/passwd 2>&1 )
$ echo $?
0
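If you do need declare (or local inside a function) together with the command's exit status, one common workaround -- a sketch -- is to declare first and assign in a separate statement:
$ declare DUMMY
$ DUMMY=$( grep r00t /etc/passwd 2>&1 )
$ echo $?
1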
Try
: $(yourcommand)
: is the shell's "do nothing" builtin.
$(...) runs your command and hands its standard output to : as arguments (standard error is not suppressed).
Like andynormancx' post, use this (if you're working in a Unix environment):
scriptname > /dev/null
Or you can use this (if you're working in a Windows environment):
scriptname > nul
This is another option (|& is Bash 4+ shorthand for 2>&1 |):
scriptname |& :
Take a look at this example from The Linux Documentation Project:
3.6 Sample: stderr and stdout 2 file
This will place every output of a program to a file. This is suitable sometimes for cron entries, if you want a command to pass in absolute silence.
rm -f $(find / -name core) &> /dev/null
That said, you can use this simple redirection:
/path/to/command &>/dev/null
In your script you can add the following to the lines that you know are going to produce output:
some_code 2>>/dev/null
That discards standard error. Or, to discard standard output instead, try
some_code >>/dev/null

Capture historical process history UNIX?

I'm wondering if there is a way of capturing a list of the processes executed by a non-interactive shell?
Basically I have a script which pulls some variables from other sources, and I want to see what the values of those variables are. However, the script executes and finishes very quickly, so I can't capture the values using ps.
Is there a way to log processes and what arguments were used?
TIA
Huskie
EDIT:
I'm using Solaris in this instance. I even thought about having a quick looping script to capture the values being passed - but this doesn't seem very accurate and I'm sure executions aren't always being captured.
I tried this:
#!/bin/ksh
while true
do
    ps -ef | grep $SCRIPT_NAME | egrep -v 'shl|lis|grep' >> grep_out.txt
done
I'd use sleep but I can't specify any precision as all my sleep executables want an integer value rather than any fractional value.
On Solaris:
truss -s!all -daDf -t exec yourCommand 2>&1 | grep -v ENOENT
On AIX and possibly other System V based OSes:
truss -s!all -daDf -t execve yourCommand 2>&1 | grep -v ENOENT
On Linux and other OSes supporting strace, you can use this command:
strace -ff -etrace=execve yourCommand 2>&1 >/dev/tty | grep -v ENOENT
In case the command you want to trace is already running, you can replace yourCommand with -p pid, where pid is the process id of the process to be traced.
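For example (1234 here is just a placeholder for the real process id):
strace -ff -etrace=execve -p 1234 2>&1 >/dev/tty | grep -v ENOENT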
EDIT:
Here is a way to trace your running script(s) under Solaris:
for pid in $(pgrep -f $SCRIPT_NAME); do
    truss -s!all -daDf -t exec -p $pid 2>&1 | grep -v ENOENT > log.$pid.out &
done
Note that with Solaris, you might also use dtrace to get the same (and more).
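A sketch of the dtrace route (run with root or dtrace privileges):
# print the arguments of every process successfully exec'd, system-wide
dtrace -n 'proc:::exec-success { trace(curpsinfo->pr_psargs); }'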
Most shells can be invoked in debug mode, where each statement being executed is printed to stdout (or stderr) after variable substitution and expansion.
For Bourne-like shells (sh, bash), debug mode is enabled with the -x option (as in bash -x myscript) or by using the set -x statement within the script itself.
However, debugging only works for the 'current' script. If the script calls other scripts, those other scripts will not execute in debug mode. Furthermore, the code inside functions may not be executed in debug mode either - it depends on the specific shell - although you can use set -x within a function to enable debug explicitly.
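A sketch of that approach in bash; exporting SHELLOPTS is one bash-specific way to make child bash scripts inherit the xtrace setting (other_script.sh is a placeholder):
#!/bin/bash
set -x                                       # print each command after expansion
export SHELLOPTS                             # child bash scripts inherit -x too
export PS4='+ ${BASH_SOURCE}:${LINENO}: '    # optional: show file and line in the trace
./other_script.sh                            # placeholder child script, also traced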
A much more verbose (at least by default) option is to use something like strace for this.
strace -f -o trace.out script.sh
will give you huge amounts of information about what the script is doing. For your specific use you will likely want to limit the output a bit with the -e trace=... option to control which system calls are traced.
Use truss instead of strace on Solaris, and dtruss on OS X (I believe), with the appropriate changes to the command-line arguments.

Piping `cd` or `popd` output prevents changing directories?

I understand that since | initiates a new process for the command(s) after the pipe, any shell command of the form cmd | cd newdir (where cmd does not change the current working directory) will leave the original process's working directory unchanged. (Not to mention that this is a bit silly since cd doesn't read input from stdin.)
However, on my machine (a CentOS 6 box, using bash, ksh, or zsh), it appears that the following command also fails to change directories:
cd newdir | cat
(Please ignore how silly it is to pipe output to cat here; I'm just trying to make a simple example.)
Why is this? Is there a way around this problem? Specifically, I'm trying to write an alias that uses popd but catches the output, discards stdout, and re-outputs stderr.
(For the curious, this is my current, non-working alias: popd 2>&1 >/dev/null | toerr && lsd. Here, toerr just catches stdin, outputs it to stderr, and returns the number of lines read/printed. lsd is a directory-name-and-contents printer that should only run if the popd is successful. The reason I'm sending stderr to stdout, piping it, catching it, and re-outputting it on stderr is just to get it coloured red using stderred; since my shell session isn't loaded with LD_PRELOAD, Bash built-ins such as popd don't otherwise get the red-coloured stderr.)
In bash, dash and ash, each command in a pipeline runs in a subshell.
In zsh, ksh, and bash with shopt -s lastpipe, all except the last command in the pipeline run in subshells.
Since cd -- like variable assignments, shell options, ulimits and new file descriptors -- only affects the current process, its effect will not reach the parent shell.
Examples:
# Doesn't change directory
cd foo | cat
pwd
# Doesn't set $bar on default bash (but does on zsh and ksh)
echo foo | read bar
echo "$bar"
# Doesn't change the ulimit
ulimit -c 10000 2>&1 | grep "not permitted"
ulimit -c
The same also applies to other things that generate subshells. None of the following will change the directory:
# Command expansion creates a subshell
echo $(cd foo); pwd
# ( .. ) creates a subshell
( cd foo ); pwd
# Backgrounding a process creates a subshell
cd foo & pwd
To fix it, you have to rewrite your code to run anything that affects the environment in the main shell process.
In your particular case, you can consider using process substitution:
popd > /dev/null 2> >(toerr) && lsd
This has the additional benefit of only running lsd when popd is successful, rather than when toerr is successful like your version does.
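As mentioned above, bash's lastpipe option is another way to keep the last element of a pipeline in the current shell (a sketch; it only takes effect when job control is off, i.e. in non-interactive scripts):
#!/bin/bash
shopt -s lastpipe
echo foo | read bar     # read now runs in the current shell
echo "$bar"             # prints foo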

Copy stderr and stdout to a file as well as the screen in ksh

I'm looking for a solution (similar to the bash code below) to copy both stdout and stderr to a file in addition to the screen within ksh on Solaris.
The following code works great in the bash shell:
#!/usr/bin/bash
# Clear the logfile
>logfile.txt
# Redirect all script output to a logfile as well as their normal locations
exec > >(tee -a logfile.txt)
exec 2> >(tee -a logfile.txt >&2)
date
ls -l /non-existent/path
For some reason this is throwing a syntax error on Solaris. I assume it's because I can't do process substitution, and I've seen some posts suggesting the use of mkfifo, but I've yet to come up with a working solution.
Does anyone know of a way that all output can be redirected to a file in addition to the default locations?
Which version of ksh are you using? The >() is not supported in ksh88, but is supported in ksh93 - the bash code should work unchanged (aside from the #! line) on ksh93.
If you are stuck with ksh88 (poor thing!) then you can emulate the bash/ksh93 behaviour using named pipes:
#!/bin/ksh
# Clear the logfile
>logfile.txt
pipe1="/tmp/mypipe1.$$"
pipe2="/tmp/mypipe2.$$"
trap 'rm "$pipe1" "$pipe2"' EXIT
mkfifo "$pipe1"
mkfifo "$pipe2"
tee -a logfile.txt < "$pipe1" &
tee -a logfile.txt >&2 < "$pipe2" &
# Redirect all script output to a logfile as well as their normal locations
exec >"$pipe1"
exec 2>"$pipe2"
date
ls -l /non-existent/path
Using two named pipes (and two tee processes) keeps stdout and stderr separate, so the second stream could just as easily be redirected to a different file.
How about this:
(some commands ...) 2>&1 | tee logfile.txt
Add -a to the tee command line for subsequent invocations to append rather than overwrite.
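One caveat with that form (an aside, assuming bash or ksh93): the exit status of the pipeline is tee's, so a failure of the bracketed commands is hidden unless you enable pipefail:
set -o pipefail
(some commands ...) 2>&1 | tee logfile.txt
echo $?    # non-zero if the command group failed, even though tee succeeded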
In ksh, the following works very well for me:
LOG=log_file.$(date +%Y%m%d%H%M%S).txt
{
    ls
    date
    # ... whatever commands you need
} 2>&1 | tee -a $LOG
