Expand An Alias That Executes Another Alias (Nested Alias) - bash

I have two aliases:
alias ls="ls -G"
alias la="ls -aFhlT"
I know that after you type your alias, but before you execute, you can type Meta-Control-e (probably Alt-Control-e, but possibly Esc-Control-e) to expand what you've typed.
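As an aside, this expansion is readline's shell-expand-line function; you can check which key sequence it is bound to on your system (assuming bash's default bindings):
$ bind -p | grep shell-expand-line
"\e\C-e": shell-expand-line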
So, if I expand my alias la using this method I get:
ls -aFhlT
However, what I really want is to see:
ls -G -aFhlT
Is there any way to achieve this besides typing Meta-Control-e a second time?
--OR--
Is there any way to confirm that my execution of la actually executed ls -G -aFhlT (other than knowing how nested aliases work and trusting that it did what I think it did)?
I'm trying to do this on macOS, but a general bash solution will also be accepted.

This question rides the fine line between using an alias and using a function. When aliases get even slightly complicated, it is generally better to write a function instead. That being said, I did find a solution for this question that allows for expanding aliases as desired.
I wrote a bash function for this:
xtrace() {
    local eval_cmd
    printf -v eval_cmd '%q ' "${@}"
    { set -x
      eval "${eval_cmd}"
    } 2>&1 | grep '^++'
    return "${PIPESTATUS[0]}"
}
The -v flag of printf will store the output of printf in the specified variable.
The printf format string %q will print the associated argument ($@ in this case) shell-quoted, reusable as input. This eliminates the dangers associated with passing arbitrary code/commands to eval.
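For example, a quick toy demonstration of %q quoting (not part of the original answer):
$ printf -v cmd '%q ' ls 'a file with spaces'
$ echo "$cmd"
ls a\ file\ with\ spaces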
I then use a command group { ... } so I can contain the effect of set -x, which tells bash to print a trace of every command it executes. For my purposes, I do not care about any output except the fully expanded command, so I redirect stderr (where the trace is written) to stdout and grep for the line that starts with "++". This is the line that shows the fully expanded command.
Finally, I return the value of PIPESTATUS[0], which contains the exit status of the first segment of the pipeline, i.e. the command group running eval, rather than the exit status of grep.
Thus, we will get something like the following:
$ xtrace la; echo $?
++ ls -G -aFhlT
0
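As a side note, PIPESTATUS itself is easy to demonstrate in isolation (a toy example, not part of the original answer):
$ false | true; echo "${PIPESTATUS[0]} ${PIPESTATUS[1]}"
1 0
Without it, $? would only report the exit status of the last command in the pipeline (the grep, in xtrace's case).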
Much thanks to @CharlesDuffy for the set -x recommendation as well as the input sanitization for eval.

Related

Generating parameters for `docker run` through command expansion from .env files

I'm facing some problems passing environment parameters to docker run in a relatively generic way.
Our first iteration was to load a .env file into the environment via these lines:
set -o allexport;
. "${PROJECT_DIR}/.env";
set +o allexport;
And then we manually typed the --env VARNAME=$VARNAME options for the docker run command. But this can be quite annoying when you have dozens of variables.
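That is, something like this (an illustration with made-up variable names):
docker run --env DB_HOST="$DB_HOST" --env DB_USER="$DB_USER" --rm image:label command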
Then we tried to just pass the file, with --env-file .env, and it seems to work, but it doesn't, because it does not play well with quotes around the variable values.
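To see the quoting problem concretely, here is a sketch (docker's --env-file takes everything after the first = literally, so the quotes become part of the value):
$ cat .env
GREETING="hello world"
$ docker run --rm --env-file .env busybox sh -c 'echo "$GREETING"'
"hello world"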
Here is where I started doing crazy/ugly things. The basic idea was to do something like:
set_docker_parameters()
{
    grep -v '^$' "${PROJECT_DIR}/.env" | while IFS= read -r LINE; do
        printf " -e %s" "${LINE}"
    done
}
docker run $(set_docker_parameters) --rm image:label command
Where the parsed lines are like VARIABLE="value", VARIABLE='value', or VARIABLE=value. Blank lines are discarded by the piped grep.
But docker run complains all the time about not being called properly. When I expand the result of set_docker_parameters I get what I expected, and when I copy its result and replace $(set_docker_parameters), then docker run works as expected too, flawless.
Any idea on what I'm doing wrong here?
Thank you very much!
P.S.: I'm trying to make my script 100% POSIX-compatible, so I'll prefer any solution that does not rely on Bash-specific features.
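For what it's worth, the failure mode can be reproduced in isolation: an unquoted command substitution is split on every run of whitespace, including whitespace inside the values. A toy demonstration (not from the original thread):
$ args='-e A="x y"'
$ printf '[%s]\n' $args
[-e]
[A="x]
[y"]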
Based on the comments of @jordanm I devised the following solution:
docker_run_wrapper()
{
    # That's not ideal, but in any case it's not directly related to the question.
    cmd=$1
    set --  # Unset all positional arguments ($# will be emptied)

    # We don't have arrays (we want to be POSIX compatible), so we'll
    # use "$@" as a sort of substitute, appending new values to it.
    # Read from the file directly rather than piping grep into the loop,
    # so that the "set --" calls run in the current shell and not in a
    # pipeline subshell (where the appended arguments would be lost).
    while IFS= read -r LINE; do
        [ -n "${LINE}" ] || continue  # skip blank lines (this replaces the grep)
        set -- "$@" "--env" "${LINE}"
    done < "${PROJECT_DIR}/.env"

    # We use "$@" in a clearly non-standard way, just to expand the values
    # coming from the .env file.
    docker run "$@" "image:label" /bin/sh -c "${cmd}"
}
Then again, this is not the code I wrote for my particular use case, but a simplification that shows the basic idea. If you can rely on having Bash, it could be much cleaner: instead of overloading "$@" you would use a proper array, as sketched below.
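A minimal sketch of that Bash-array variant (same assumptions about the .env layout as above):
docker_run_wrapper()
{
    local cmd=$1 line
    local args=()
    # Collect "--env LINE" pairs in a real array instead of in "$@"
    while IFS= read -r line; do
        [ -n "$line" ] && args+=( --env "$line" )
    done < "${PROJECT_DIR}/.env"
    docker run "${args[@]}" "image:label" /bin/sh -c "${cmd}"
}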

Is there a way to prevent injection attacks when building a command-line from untrusted input in bash?

I have a situation where a Bash script runs and parses a user-supplied JSON file using jq. Since it's supplied by the user, it's possible for them to include values in the JSON to perform an injection attack.
I'd like to know if there's a way to overcome this. Please note, the setup of: 'my script parsing a user-supplied JSON file' cannot be changed, as it's out of my control. Only thing I can control is the Bash script.
I've tried using jq with and without the -r flag, but in each case, I was successfully able to inject.
Here's what the Bash script looks like at the moment:
#!/bin/bash
set -e
eval "INCLUDES=($(cat user-supplied.json | jq '.Include[]'))"
CMD="echo Includes are: "
for e in "${INCLUDES[@]}"; do
    CMD="$CMD\\\"$e\\\" "
done
eval "$CMD"
And here is an example of a sample user-supplied.json file that demonstrates an injection attack:
{
    "Include": [
        "\\\";ls -al;echo\\\""
    ]
}
The above JSON file results in the output:
Includes are: ""
, followed by a directory listing (an actual attack would probably be something far more malicious).
What I'd like instead is something like the following to be outputted:
Includes are: "\\\";ls -al;echo\\\""
Edit 1
I used echo as an example command in the script, which probably wasn’t the best example, as then the solution is simply not using eval.
However, the actual command that will be needed is dotnet test, and each array item from Includes needs to be passed as an option using /p:<Includes item>. What I was hoping for was a way to globally neutralise injection regardless of the command, but perhaps that's not possible, i.e. the technique you go for relies heavily on the actual command.
You don't need to use eval for dotnet test either. Many bash extensions not present in POSIX sh exist specifically to make eval usage unnecessary; if you think you need eval for something, you should provide enough details to let us explain why it isn't actually required. :)
#!/usr/bin/env bash
#      ^^^^- Syntax below is bash-only; the shell *must* be bash, not /bin/sh

include_args=( )
IFS=$'\n' read -r -d '' -a includes < <(jq -r '.Include[]' user-supplied.json && printf '\0')
for include in "${includes[@]}"; do
    include_args+=( "/p:$include" )
done
dotnet test "${include_args[@]}"
To speak a bit to what's going on:
IFS=$'\n' read -r -d '' -a arrayname reads up to the next NUL character in stdin (-d specifies a single character to stop at; since C strings are NUL-terminated, the first character in an empty string is a NUL byte), splits on newlines, and puts the result into arrayname.
The shorter way to write this in bash 4.0 or later is readarray -t arrayname (sketched after this list), but that doesn't have the advantage of letting you detect whether the program generating the input failed: because we have the && printf '\0' attached to the jq code, the NUL terminator this read expects is only present if jq succeeds, causing the read's exit status to reflect success only if jq reported success as well.
< <(...) is redirecting stdin from a process substitution, which is replaced with a filename which, when read from, returns the output of running the code ....
The reason we can set include_args+=( "/p:$include" ) and have it be exactly the same as include_args+=( /p:"$include" ) is that the quotes are read by the shell itself and used to determine where to perform string-splitting and globbing; they're not persisted in the generated content (and thus later passed to dotnet test).
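Concretely, the shorter readarray form mentioned in the list above would be (bash 4.0+; a sketch that trades away the jq failure detection):
readarray -t includes < <(jq -r '.Include[]' user-supplied.json)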
Some other useful references:
BashFAQ #50: I'm trying to put a command in a variable, but the complex cases always fail! -- explains in depth why you can't store commands in strings without using eval, and describes better practices to use instead (storing commands in functions; storing commands in arrays; etc).
BashFAQ #48: Eval command and security issues -- Goes into more detail on why eval is widely frowned on.
You don't need eval at all.
INCLUDES=( $(jq '.Include[]' user-supplied.json) )
echo "Includes are: "
for e in "${INCLUDES[@]}"; do
    echo "$e"
done
The worst that can happen is that the unquoted command substitution may perform word-splitting or pathname expansion where you don't want it (which is a problem in your original as well), but there's no possibility for arbitrary command execution.

bash to sh (ash) spoofing

I have a 3rd party generator that's part of my build process (sbt native packager). It generates a bash script to be used to run my built program.
Problem is I need to use sh (ash), not bash. So the generator cranks out a line like this:
declare -a app_mainclass=("com.mypackage.Go")
sh chokes on this as there is no 'declare' command.
Clever me--I just added these lines:
alias declare=''
alias '-a'=''
This worked on all such declarations except this one--because of the parens. sh apparently has no arrays.
Given that I cannot practically change the generator, what can I do to spoof the sh code to behaving properly? In this case I logically want to eliminate the parens. (If I do this manually in the generated output it works great.)
I was thinking of trying to define a function app_mainclass= () { app_mainclass=$1; } but sh didn't like that--complained about the (. Not sure if there's a way to include the '=' as part of the function name or not.
Any ideas of a way to trick sh into accepting this generated command (the parens)?
I hesitate to suggest it, but you might try a function declaration that uses eval to execute any assignments produced by a declare statement. I might verify that the generated declare statements are "safe" before using this. (For example, that the assigned value doesn't contain any thing that might be executed as arbitrary code by eval.)
declare () {
    array_decl=
    for arg; do
        # Check if -a is used to declare an array
        [ "$arg" = -a ] && array_decl=1
        # Ignore non-assignment arguments (expr prints the match length,
        # so send that to /dev/null and keep only the exit status)
        expr "$arg" : '.*=.*' > /dev/null || continue
        # Split the assignment into separate name and value
        IFS='=' read -r name value <<EOF
$arg
EOF
        # If it's an array assignment, strip the leading and trailing parentheses
        if [ -n "$array_decl" ]; then
            value=${value#\(}
            value=${value%\)}
        fi
        # Cross your fingers... I'm assuming $value was already quoted, as in your example.
        eval "$name=$value"
    done
}

Shell - Reading backslash in command line parameters

I'm thinking of writing a script for cygwin to cd into a windows directory which is copied from Windows explorer.
e.g.
cdw D:\working\test
equals to
cd /cygdrive/d/working/test
But it seems that in a shell script all backslashes in parameters are dropped, unless I use single quotes ('D:\working\test') or doubled backslashes (D:\\working\\test).
In my case that would be very inconvenient, because I couldn't simply paste the directory name into the command line to run the script.
Is there any way to make cdw D:\working\test work?
Well, you can do it, but you want something strange :)
cdw()
{
    set $(history | tail -1 )
    shift 2
    path="$*"
    cd "$(cygpath "$path")"
}
Example of usage:
$ cdw D:\working\test
$ pwd
/cygdrive/d/working/test
The main point here is the usage of history.
You don't use an argument directly, but get it from the history in the form it was typed.
$ rawarg() { set $(history | tail -1 ); shift 2; echo "$@"; }
$ rawarg C:\a\b\c\d
C:\a\b\c\d
Of course, you can use this trick in an interactive shell only (for obvious reasons).
The problem you deal with is related to the shell. Any argument you add to cdw on the command line, will be processed by the shell before cdw gets executed.
In order to prevent that processing to happen, you need at least one level of quoting,
either by enclosing the whole string in single quotes:
cd 'D:\working\test'
or with doubled backslashes:
cd D:\\working\\test
A separate program will not help, because the damage is already done before it runs. ;-)
However, I have a possible function for cdw, which works in my AST UWIN ksh:
function cdw { typeset dir
    read -r dir?"Paste Directory Path: "
    cd "${dir:?}"
}
And this one works in Bash (which does not support read var?prompt):
function cdw {
    typeset dir
    printf "Paste Directory Path: "
    read -r dir || return
    cd "${dir:?}"
}
For me, I just type two single quotes around the pasted value; wrapping the path in single quotes preserves the backslashes and still allows copy and paste.

Shell script : changing working dir and spaces in folder name

I want to make a script that takes a file path as argument and cds into its folder.
Here is what I made:
#!/bin/bash
#remove the file name, and change every space into \space
shorter=`echo "$1" | sed 's/\/[^\/]*$//' | sed 's/\ /\\\ /g'`
echo $shorter
cd $shorter
I actually have 2 questions (I am a relative newbie to shell scripts):
How could I make the cd "persistent"? I want to put this script into /usr/bin, and then call it from wherever in the filesystem. Upon return of the script, I want to stay in the $shorter folder. Basically, if pwd was /usr/bin, I could make it by typing . script /my/path instead of ./script /my/path, but what if I am in another folder?
The second question is trickier. My script fails whenever there is a space in the given argument. Although $shorter is exactly what I want (for instance /home/jack/my\ folder/subfolder), cd fails with the error /usr/bin/script : line 4 : cd: /home/jack/my\: no file or folder of this type. I think I have tried everything; using things like cd '$shorter' or cd "'"$shorter"'" doesn't help. What am I missing?
Thanks a lot for your answers
in your .bashrc add the following line:
function shorter() { cd "${1%/*}"; }
% means: remove the shortest match of the pattern from the end
/* is the pattern
Then in your terminal:
$ . ~/.bashrc # to refresh your bash configuration
$ type shorter # to check if your new function is available
shorter is a function
shorter ()
{
    cd "${1%/*}"
}
$ shorter ./your/directory/filename # this will move to ./your/directory
The first part:
The change of directory won't be “persistent” beyond the lifetime of your script, because your script runs in a new shell process. You could, however, use a shell alias or a shell function. For example, you could embed the code in a shell function and define it in your .bash_profile or other source location.
mycdfunction () {
    cd /blah/foo/"$1"
}
As for the “spaces in names” bit:
The general syntax for referring to a variable in Bourne shells is: "$var" — the "double quotes" tell the shell to expand any variables inside of them, but to group the outcome as a single parameter.
Omitting the double quotes around $var tells the shell to expand the variable, but then split the results into parameters (“words”) on whitespace. This is how the shell splits up parameters, normally.
Using 'single quotes' causes the shell to not expand any contents, while still grouping them together as a single parameter.
You can use \ (backslash-blank) to escape a space when you're typing (or in a script), but that's usually harder to read than using 'single quotes' or "double quotes"…
Note that the expansion phase includes: $variables, wild?cards*, {grouping,names}with-braces, $(command substitution), and other effects.
           | expansion   | no expansion
-----------+-------------+-------------------
grouping   | " "         | ' '
splitting  | (no punc.)  | (not easily done)
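To make the table concrete, a small demonstration (printf emits one bracketed line per argument it receives):
$ f='my folder'
$ printf '[%s]\n' "$f"
[my folder]
$ printf '[%s]\n' $f
[my]
[folder]
$ printf '[%s]\n' '$f'
[$f]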
For the first part, there is no need for the shorter variable at all. You can just do:
#!/bin/bash
cd "${1%/*}"
Explanation
Most shells, including bash, have what is called Parameter Expansion. It is very powerful and efficient, as it lets you manipulate variables natively within the shell where you would otherwise need a call to an external binary.
Two common examples of where you can use Parameter Expansion over an external call would be:
${var%/*} # replaces dirname
${var##*/} # replaces basename
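A quick illustration of those two expansions (a toy example):
$ var=/home/jack/docs/report.txt
$ echo "${var%/*}"
/home/jack/docs
$ echo "${var##*/}"
report.txt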
See this FAQ on Parameter Expansion to learn more. In fact, while you're there, you might as well go over the whole FAQ.
When you put your script inside /usr/bin you can call it from anywhere. To deal with whitespace in the shell, just put the target between double quotes ("").
Well here is a demo:
#!/bin/bash
#you can use dirname, but that's not appropriate (it's an external call)
#shorter=$(dirname "$1")
#Use parameter expansion instead (much better)
shorter=${1%/*}
echo "$shorter"
An alternate way to do it, since you have dirname on your Mac:
#!/bin/sh
cd "$(dirname "$1")"
Since you mentioned in the comments that you wanted to be able to drag files into a window and cd to them, you might want to make your script allow file or directory paths as arguments:
#!/bin/sh
[ -f "$1" ] && set "$(dirname "$1")" # convert a file to a directory
cd "$1"
