Create shell alias with semi-colon character - macOS

I've noticed that I have a tendency to mistype ls as ;s, so I decided that I should just create an alias so that instead of throwing an error, it just runs the command I mean.
However, knowing that the semi-colon character has a meaning in shell scripts/commands, is there any way to create an alias using the semi-colon character? I've tried the following to no avail:
alias ;s=ls
alias ";s"=ls
alias \;=ls
Is it possible to use the semi-colon as a character in a shell alias? And how do I do so in ZSH?

First and foremost: Consider solving your problem differently - fighting the shell grammar this way is asking for trouble.
As far as I can tell, while you can define such a command - albeit not with an alias - the only way to call it is quoted, e.g. as \;s - which defeats the purpose; read on for technical details.
An alias won't work: while zsh allows you to define it (which, arguably, it shouldn't), the very mechanism that would be required to call it - quoting - is also the very mechanism that bypasses aliases and thus prevents invocation.
You can, however, define a function (zsh only) or a script in your $PATH (works in zsh as well as in bash, ksh, and dash), as long as you invoke it quoted (e.g., as \;s or ';s' or ";s"), which defeats the purpose.
For the record, here are the command definitions, but, again, they can only be invoked quoted.
Function (works in zsh only; place in an initialization file such as ~/.zshrc):
';s'() { ls "$@" }
Executable script ;s (works in dash, bash, ksh and zsh; place in a directory in your $PATH):
#!/bin/sh
ls "$@"
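Either way, the command can then only be invoked with some form of quoting, for example (each line runs ls -l):
\;s -l
';s' -l
";s" -l
which, as noted above, defeats the original purpose of catching the unquoted typo.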

Related

Read file and run command in zsh

So I generally create job files with a list of commands in them. Then I execute such a file like so
cat jobFile | while read a; do $a; done
Which always works in bash. However, I've just started working on a Mac, which apparently uses zsh, and this command fails with "no such file" etc. I've tested the job file by running a few lines from it manually, so it should be fine.
I've found questions on zsh read, but they tend to be about reading in from variables, e.g. $a=('a' 'b' 'c') or echo $a
Thank you for your answers!
In bash, unquoted parameter expansions always undergo word-splitting, so if a="foo bar", then $a expands to two words, foo and bar. As a command, this means running the command foo with an argument bar.
In zsh, parameter expansions do not undergo word-splitting by default, which means the same expansion $a would produce a single word foo bar, treated as the name of the command to execute.
In either case, relying on parameter expansion to "parse" a shell command is fragile; in addition to word-splitting, the expansion is subject to pathname expansion (globbing), and you are limited to simple commands and their arguments. No pipes, lists (&&, ||), or redirections allowed, as everything will be treated as the command name and a sequence of arguments.
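A minimal illustration of the difference at an interactive prompt (the stored command is just an example):
a='echo foo bar'
$a    # bash: splits into three words and runs: echo foo bar
      # zsh: looks for a single command named "echo foo bar" and fails with "command not found"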
What you want in both shells is to simply treat your job file as a shell script, which can be executed in the current shell using the . command:
. jobFile
Why are you executing it in such a cumbersome way? Assuming jobFile is a file holding a sequence of bash commands, you can simply run it as
bash jobFile
If it contains a sequence of zsh commands, you can likewise run it as
zsh jobFile
If you follow this approach, I would, however, reflect in the job file's name which shell it is intended for, e.g.
bash jobFile.bash
zsh jobFile.zsh
and, if you write a job file that is supposed to be compatible with either shell, I would name it jobFile.sh.
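For what it's worth, a minimal sketch of such a job file (the file name and commands are only placeholders). Because the file is now interpreted as a script rather than expanded word by word, pipes, lists, and redirections work normally:
# jobFile.sh - POSIX-compatible commands, runnable by either shell
mkdir -p /tmp/job-output
ls -l /var/log | grep '\.log$' > /tmp/job-output/logs.txt
It can then be run with bash jobFile.sh or zsh jobFile.sh as described above.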

Script runs when executed but fails when sourced

Original Title: Indirect parameter substitution breaks when the script is sourced (zsh)
zsh 5.7.1 (x86_64-apple-darwin19.0)
GNU bash, version 4.4.20(1)-release (x86_64-pc-linux-gnu)
I’m developing a shell script on a Mac and I’m trying to keep it portable between bash & zsh, so array indexing is a consideration. I know that I can set KSH_ARRAYS to get indexing to start at 0, but I decided to query the OS for the shell that’s in use and set the start index accordingly, which led to the issue described below.
It made sense (to me anyway!) to use indirect expansion, which is what led to the problem. Consider the script indirect.sh:
#! /bin/bash
declare -r ARRAY_START_BASH=0
declare -r ARRAY_START_ZSH=1
declare -r SHELL_BASH=0
declare -r SHELL_ZSH=1
# Indirect expansion is used to reference the values of the variables declared
# in this case statement e.g. ${!ARRAY_START}
case $(basename $SHELL) in
"bash" )
declare -r SHELL_ID=SHELL_BASH
declare -r ARRAY_START=ARRAY_START_BASH
;;
"zsh" )
declare -r SHELL_ID=SHELL_ZSH
declare -r ARRAY_START=ARRAY_START_ZSH
;;
* )
return 1
;;
esac
echo "Shell ID: ${!SHELL_ID} Index arrays from: ${!ARRAY_START}"
It works fine when run from the command line while in the same directory:
<my home> ~ % echo "$(./indirect.sh)"
Shell ID: 1 Index arrays from: 1
Problems arise when I source the script:
<my home> ~ % echo "$(. ~/indirect.sh)"
/Users/<me>/indirect.sh:28: bad substitution
I don’t understand why sourcing the script changes the behavior of the parameter expansion.
Is this expected behavior? If so, I'd be grateful if someone could explain it and, hopefully, offer a workaround.
The problem described in the original post has nothing to do with indirect expansion. The difference in behavior is a result of different shells being invoked depending on whether the script is "executed" or "sourced". These differences reveal the basic flaw in the script's design: deriving the shell from the $SHELL variable. If the shell named in $SHELL does not match the shebang, the script will fail either when sourced or when executed. An explanation follows.
Indirect expansion doesn't offer value in the given scenario, because the values could just as easily be assigned directly; they'll have to be assigned that way regardless, given the different indirect-expansion syntax used by each shell. In fact, other syntax differences between the shells make the entire premise of detecting the shell moot. Putting that aside, the behavior of sourcing is well documented, with numerous explanations on the web, but for context here's how it works:
Executing a Script
Use the "./" syntax to execute a script.
When run this way, the script executes in a sub-shell. Any changes the script makes to its shell are applied to the sub-shell, not the shell in which the script was launched, so those changes are lost when the script finishes, because the sub-shell in which it executed is destroyed as well. For example, if the script changes the working directory, it does so in the sub-shell; the working directory of the main shell that launched the script is unchanged when the script terminates. If you want to make changes to the shell in which the script was launched, it must be sourced.
Sourcing a Script
Use the "source" syntax to source a script. When run this way, the script essentially becomes an argument for the source command, which handles invoking the appropriate execution. Some shells (e.g. ksh) use a single period "." instead of "source".
When a script is executed with the "./" syntax, the shebang at the top of the file is used to determine which shell to use. When a script is sourced, the shebang is ignored and the shell in which the script is launched is used instead. Also note that the period in the "./" syntax used to execute a script is not related to the period that is occasionally used as an alias for the source command.
The script in the post uses bash in the shebang statement, so it works when executed because it’s run using bash. When it’s sourced from zsh, it encounters the incorrect indirect expansion syntax:
"${!A_VAR}"
The correct syntax is:
"${(P)A_VAR}"
However, correcting the syntax won’t help because it will then fail when executed. The shebang will invoke bash and the syntax will be wrong again. That renders indirection useless for accessing a variable designed to indicate the shell in use. More importantly, a design based on querying an environment variable for the shell is flawed due to differences in the shell that’s ultimately used depending on whether the script is executed or sourced.
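One commonly used alternative, not from the original post and offered only as a sketch: detect the running shell from its shell-specific version variable, which gives the same result whether the file is executed or sourced.
# works in bash and zsh, executed or sourced
if [ -n "${ZSH_VERSION:-}" ]; then
    ARRAY_START=1     # zsh indexes arrays from 1 by default
elif [ -n "${BASH_VERSION:-}" ]; then
    ARRAY_START=0     # bash indexes arrays from 0
fi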
To add to your answer (what I'm going to say is too long for a comment): I cannot think of any application where your script could be useful if not sourced. Actually, I came across the need for such a script myself on exactly one occasion:
Since I use not only zsh but also sometimes bash as an interactive shell, I have written my .zshrc and .bashrc to set up everything (including defining variables and shell functions for interactive use). In order to save work,
I try to put code which works under both bash and zsh into a single file (say .commonrc), and my .zshrc and .bashrc contain a
source .commonrc
While many things are so different in bash and zsh that I can't put them into .commonrc, some can go there, provided I do some tweaking. One source of headaches is obviously the different indexing of arrays, which you are seemingly trying to solve, so I have a similar feature. However, I don't need a case construct for this. Instead, my .bashrc looks like this (using your naming of the variables):
...
declare -r ARRAY_START=0
source .commonrc
...
and my .zshrc looks like this:
...
declare -r ARRAY_START=1
source .commonrc
...
Since .bashrc never gets run from zsh and vice versa, I don't need to query what kind of shell I have.
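For illustration, a hypothetical snippet from such a .commonrc, relying only on ARRAY_START having been set by whichever rc file sourced it (the array and its contents are just placeholders):
# inside .commonrc - works in bash and zsh because ARRAY_START
# was set by the calling .bashrc (0) or .zshrc (1)
colors=(red green blue)
echo "first color: ${colors[$ARRAY_START]}"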

prevent script injection when spawning command line with input arguments from external source

I've got a Python script that wraps a bash command-line tool and gets its variables from an external source (environment variables). Is there any way to perform some sort of escaping to prevent a malicious user from executing bad code via one of those parameters?
For example, if the script looks like this
/bin/sh
/usr/bin/tool ${VAR1} ${VAR2}
and someone sets VAR2 as follows
export VAR2=123 && \rm -rf /
then the script might not treat VAR2 as pure input and might end up performing the rm command.
Is there any way to make the variable non-executable and take the string as-is to the command line tool as input ?
The correct and safe way to pass the values of variables VAR1 and VAR2 as arguments to /usr/bin/tool is:
/usr/bin/tool -- "$VAR1" "$VAR2"
The quotes prevent any special treatment of separator or pattern matching characters in the strings.
The -- should prevent the variable values being treated as options if they begin with - characters. You might have to do something else if tool is badly written and doesn't accept -- to terminate command line options.
See Quotes - Greg's Wiki for excellent information about quoting in shell programming.
Shellcheck can detect many cases where quotes are missing. It's available as either an online tool or an installable program. Always use it if you want to eliminate many common bugs from your shell code.
The curly braces in the line of code in the question are completely redundant, as they usually are. Some people mistakenly think that they act as quotes. To understand their use, see When do we need curly braces around shell variables?.
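A brief illustration of when the braces do matter (the variable names are just placeholders): they are needed only when the variable name would otherwise run into the text that follows it.
echo "${VAR1}_suffix"   # expands VAR1, then appends the literal text _suffix
echo "$VAR1_suffix"     # expands a different variable named VAR1_suffix (probably unset)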
I'm guessing that the /bin/sh in the question was intended to be a #! /bin/sh shebang. Since the question was tagged bash, note that #! /bin/sh should not be used with code that includes Bashisms. /bin/sh may not be Bash, and even if it is Bash it behaves differently when invoked as /bin/sh rather than /bin/bash.
Note that even if you forget the quotes the line of code in the question will not cause commands (like rm -rf /) embedded in the variable values to be run at that point. The danger is that badly-written code that uses the variables will create and run commands that include the variable values in unsafe ways. See should I avoid bash -c, sh -c, and other shells' equivalents in my shell scripts? for an explanation of (only) some of the dangers.
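To make that last point concrete, a small sketch (the tool and values are hypothetical): an unquoted expansion merely splits the value into several arguments, while the quoted, ---terminated form passes it through as a single argument; nothing in the value is executed either way.
VAR2='123 && rm -rf /'     # the && and rm here are just characters in a string
/usr/bin/tool ${VAR2}      # unquoted: tool receives 5 separate arguments: 123, &&, rm, -rf, /
/usr/bin/tool -- "$VAR2"   # quoted: tool receives the single argument '123 && rm -rf /'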
To best avoid injections, consider switching to [t]csh.
Unlike the Bourne shells, the C shell is "limited", which pushes one to take different, safer paths when writing scripts. The "limitations" imposed by the C shell make it one of the most reliable shells to work with.
(E.g., nesting is minimal to impossible, which helps prevent injections; there are better ways to achieve what one wants.)

Bash: what a bash alias actually is? [duplicate]

I'm surprised this hasn't been asked before, but…
What is the difference between
alias ⇢ alias EXPORT='alias'
function ⇢ function exporter() { echo $EXPORT }
and
export ⇢ export ALIAS='export'
and for that matter...
alias export=$(function) (j/k)
in bash (zsh, et al.)
Specifically, I'd be most interested in knowing the lexical/practical difference between
alias this=that
and
export that=this
I have both forms... all over the place - and would prefer to stop arbitrarily choosing one over the other. 😂
I'm sure there is a great reference to a "scopes and use-cases for unix shells", somewhere... but thought I'd post the question here, in the name of righteous-canonicalicism.
You're asking about two very different categories of things: aliases and functions define things that act like commands; export marks a variable to be exported to child processes. Let me go through the command-like things first:
An alias (alias ll='ls -l') defines a shorthand for a command. They're intended for interactive use (they're actually disabled by default in shell scripts), and are simple but inflexible. For example, any arguments you specify after the alias simply get tacked onto the end of the command; if you wanted something like alias findservice='grep "$1" /etc/services', you can't do it, because $1 doesn't do anything useful here.
A function is like a more flexible, more powerful version of an alias. Functions can take & process arguments, contain loops, conditionals, here-documents, etc... Basically, anything you could do with a shell script can be done in a function. Note that the standard way to define a function doesn't actually use the keyword function, just parentheses after the name. For example: findservice() { grep "$1" /etc/services; }
Ok, now on to shell variables. Before I get to export, I need to talk about unexported variables. Basically, you can define a variable to have some (text) value, and then if you refer to the variable by $variablename it'll be substituted into the command. This differs from an alias or function in two ways: an alias or function can only occur as the first word in the command (e.g. ll filename will use the alias ll, but echo ll will not), and variables must be explicitly invoked with $ (echo $foo will use the variable foo, but echo foo will not). More fundamentally, aliases and functions are intended to contain executable code (commands, shell syntax, etc), while variables are intended to store non-executable data.
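A quick illustration of the "first word" rule versus explicit $ expansion at an interactive prompt (ll and dir are just example names):
alias ll='ls -l'
ll /tmp          # first word of the command: the alias expands, running ls -l /tmp
echo ll          # prints "ll"; the alias is not expanded here
dir=/tmp
ls "$dir"        # the variable must be invoked with $: runs ls /tmp
ls dir           # no expansion: looks for a file literally named "dir"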
(BTW, you should almost always put variable references inside double-quotes -- that is, use echo "$foo" instead of just echo $foo. Without double-quotes the variable's contents get parsed in a somewhat weird way that tends to cause bugs.)
There are also some "special" shell variables, that are automatically set by the shell (e.g. $HOME), or influence how the shell behaves (e.g. $PATH controls where it looks for executable commands), or both.
An exported variable is available both in the current shell and in any subprocesses it starts (subshells, other commands, whatever). For example, if I do LC_ALL=en_US.UTF-8, that tells my current shell to use the "en_US.UTF-8" locale settings. On the other hand, if I did export LC_ALL=en_US.UTF-8, that would tell the current shell and all subprocesses and commands it executes to use that locale setting.
Note that a shell variable can be marked as exported separately from defining it, and once exported it stays exported. For example, $PATH is (as far as I know) always exported, so PATH=/foo:/bar has the same effect as export PATH=/foo:/bar (although the latter may be preferred just in case $PATH somehow wasn't already exported).
It's also possible to export a variable to a particular command without defining it in the current shell, by using the assignment as a prefix for the command. For example LC_ALL=en_US.UTF-8 sort filename will tell the sort command to use the "en_US.UTF-8" locale settings, but not apply that to the current shell (or any other commands).
TL;DR:
The shell evaluation order (per POSIX) for the entities in your question is:
aliases --> variables --> command substitutions --> special built-ins --> functions --> regular built-ins
Aliases do not persist across subshells, but variables (and in Bash, functions) can be made to do so with the export command.
Regular built-ins can be overridden by writing functions that have the same name as the regular built-in (since functions expand before regular built-ins). (NOTE: If you're trying to add functionality to the regular built-in, call the built-in with command in your function definition so you don't accidentally create a recursive function.)
Variables can be made readonly with the (special built-in) readonly command, but aliases cannot.
USE CASES:
Export a variable if you need to use it across subshells.
Make a variable readonly if you don't want it changed for the life of the parent shell (once performed, this cannot be undone with unset; you must restart the parent shell).
If you want to override or add functionality to a regular built-in, use a function (see the sketch after these notes).
NOTE: If you want to be sure that you're using a special or regular built-in and not someone else's function, use builtin the_builtin, or, if the shell doesn't support the builtin command, use the POSIX command utility: command -p the_builtin, where the -p switch tells command to use the default $PATH that ships with the shell (in case the user has overridden PATH).
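For example, a minimal sketch of overriding a regular built-in with a function (cd is just a convenient example): calling the underlying built-in via command avoids defining a function that recursively calls itself.
# wrap cd so it also lists the contents of the new directory
cd() {
    command cd "$@" && ls
}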
NOTE: A variable can be made to act like an alias that also persists across subshells and cannot be changed. For example,
#! /bin/sh
my_cmd='ls -al'
export my_cmd
readonly my_cmd
will act like
#! /bin/sh
alias my_cmd='ls -al'
so long as
my_cmd is used without double-quotes (i.e. ${my_cmd}, NOT "${my_cmd}") so it isn't treated as a single string, and
IFS is the standard space-tab-newline and not switched to something else, so that my_cmd undergoes word-splitting and each space-separated part is evaluated as a separate token (otherwise it will be evaluated as a single string).
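To illustrate, continuing the sketch above:
${my_cmd} /tmp      # word-splits into: ls -al /tmp
"${my_cmd}" /tmp    # fails: there is no command literally named "ls -al"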
Each shell (e.g. bash, zsh, ksh, yash, etc.) is a bit different, so be sure to review the reference manual for it (they each implement POSIX in a unique way, or sometimes not at all).

ZSH arguments after file name

I switched from bash to zsh and I was wondering if there is a way to put arguments after the file name, like in bash.
Example:
cp dir1 dir2 -r
Thank you
This depends only on the command, not on the shell. The shell passes the arguments in the order they're given, and makes no special treatment for arguments beginning with a -.
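For instance, whether the example from the question works depends on the cp implementation rather than on the shell: GNU cp (common on Linux) permutes options and operands, while the BSD cp shipped with macOS traditionally stops option parsing at the first operand.
cp dir1 dir2 -r    # accepted by GNU cp; traditional BSD cp does not recognize -r as an option here
cp -r dir1 dir2    # options before operands: works with either implementation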
zsh has some expanded/different features in the area of globbing and tab completion (two of the primary reasons folks switch to zsh). Both of these provide interesting ways to add command-line parameters. Is that what you are asking about?
Note also that most commands are not impacted by the shell you choose: ls, awk, grep, vim, etc. Obviously things like alias and function that are shell commands are potentially different.
