How to avoid changing my bash command use in all my scripts - bash

So I have a bash command that takes the following options:
-v -o -T -S -I -e -t
-t has been changed to -x, and -T and -e are no longer available.
How can I avoid changing all the scripts that use this command with the options that have changed or are no longer available?

You can create a wrapper for your bash command and put it in a PATH directory that comes before the one holding the executable that changed.
For instance:
Imagine that the changed bash command is in the /c directory and this is your PATH:
PATH=/a:/b:/c
One approach is to put the wrapper with the same name in the /a (or /b) directory - that is, in the PATH before /c. So, let's say your old script is called old and it's in the /c directory. You can create an old script in the /a directory, and have it call the other script:
COMMAND="/c/old $( sed -e "s:-x::g" -e "s:-T::g" <<< "$#" )"
$COMMAND
So the idea is to manipulate the command arguments before calling the /c/old script. This will need a bit of adjusting if the parameters are more complicated (for example, if some of them take a value). There is also a likely quoting issue: quotes are unlikely to survive this approach.
If you need something more robust, consider getopts as a way of parsing the parameters properly in the /a/old script.
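For instance, a minimal sketch of such a wrapper, assuming every option is a bare flag and none of them takes an argument:
#!/bin/bash
# Hypothetical wrapper installed as /a/old, forwarding to the real /c/old.
new_args=()
while getopts "voTSIet" opt; do
  case "$opt" in
    t)   new_args+=(-x) ;;            # -t was renamed to -x
    T|e) ;;                           # -T and -e no longer exist: drop them
    v|o|S|I) new_args+=("-$opt") ;;   # unchanged flags pass through
  esac
done
shift $((OPTIND - 1))
exec /c/old "${new_args[@]}" "$@"     # "$@" preserves remaining operands and their quoting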
To be honest, I'm not entirely happy with this answer - it will not work in the general case. But you asked :) ...

Related

bash, how to dot source a downloaded file (using curl) into bash

I have a .sh file that I would like to dot-source into my running environment. This does not work:
curl -s https://raw.githubusercontent.com/bla/bla/master/stuff.sh | bash
The script runs, but the environment variables and other definitions inside stuff.sh do not end up in my running environment. I also tried:
. curl -s https://raw.githubusercontent.com/bla/bla/master/stuff.sh | bash
curl -s https://raw.githubusercontent.com/bla/bla/master/stuff.sh | bash source
curl -s https://raw.githubusercontent.com/bla/bla/master/stuff.sh | source bash
All of these fail. How can this be done?
I am not a bash expert, but if you are willing to accept some drawbacks, the easiest method is to do it without a pipe, separating the download from the sourcing:
prompt># curl -s https://raw.githubusercontent.com/bla/bla/master/stuff.sh > ./stuff.sh
prompt># . ./stuff.sh
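If leaving ./stuff.sh behind bothers you, the same idea works with a temporary file (a sketch; run it in your interactive shell, not in a sub-script, or the sourced definitions die with that sub-script):
tmp=$(mktemp)
curl -s https://raw.githubusercontent.com/bla/bla/master/stuff.sh > "$tmp"
. "$tmp"
rm -f "$tmp"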
From the bash manual (man bash), in the chapter about the builtin source command:
Read and execute commands from filename [...]
There is no mention of standard input as a possible source for the commands to be sourced.
However, as hanshenrik stated in his answer, you can always use process substitution to create a temporary file, invisible on the file system, which you can feed to source. The syntax is <(list): bash expands <(list) to a unique file name, and list is a sequence of commands whose output can be read from that file.
Process substitution is documented in the bash manual (man bash) in a paragraph under that exact caption.
try
source <(curl -s https://raw.githubusercontent.com/bla/bla/master/stuff.sh)
I tried doing
curl -s https://raw.githubusercontent.com/bla/bla/master/stuff.sh | source /dev/stdin
but that didn't work for some reason, no idea why (anyone knows?)
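One plausible answer: every stage of a bash pipeline runs in its own subshell, so /dev/stdin does get sourced, but whatever it sets vanishes when that subshell exits. A minimal demonstration of the effect (assuming FOO is not already set):
echo 'FOO=bar' | source /dev/stdin
echo "FOO is '$FOO'"     # prints: FOO is '' - the assignment happened in the subshell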

command substitution not working in alias?

I wanted to make an alias for launching a vim session with all the c/header/makefiles, etc. loaded into the buffer.
shopt -s extglob
alias vimc="files=$(ls -A *.?(c|h|mk|[1-9]) .gitconfig [mM]akefile 2>/dev/null); [[ -z $files ]] || vim $files"
When I run the command enclosed within the quotes directly from the shell, it works; but when run via the alias, it does not. Running vimc causes vim to launch with only the first matched file (which happens to be the Makefile), and the other file names are executed as commands (unsuccessfully, of course). I fiddled around a bit, and it seems the command substitution introduces the problem, because running only the ls produces the expected output.
I cannot use xargs with vim because it breaks the terminal display.
Can anyone explain what might be causing this?
Here is some output:
$ ls
Makefile readme main.1 main.c header.h config.mk
$ vimc
main.1: command not found
main.c: command not found
.gitignore: command not found
header.h: command not found
config.mk: command not found
On a related note, would it be possible to do what I intend above in a "single line", i.e. without storing the result in a variable files and checking whether it is empty, using only the output stream from ls?
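For what it's worth, the usual culprit in this pattern is the quoting: inside double quotes, the $(ls ...) is expanded once, when the alias is defined, and its multi-line output is baked into the alias body. Single quotes defer the substitution until the alias is invoked. A sketch of that variant:
shopt -s extglob
# Single quotes: $(ls ...) now runs each time vimc is typed, not at definition time.
alias vimc='files=$(ls -A *.?(c|h|mk|[1-9]) .gitconfig [mM]akefile 2>/dev/null); [[ -z $files ]] || vim $files'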

Running an if statement in shell script as a single line with docker -c option

I need to run the code below as a single line via docker run -it image_name /bin/bash -c "script", where the script (dir and dockerImageName being parameters) is:
'''cd ''' + dir + ''' \
&& if make image ''' + dockerImageName + ''' 2>&1 | grep -m 1 "No rule to make target"; then
exit 1
fi'''
How can this be run as a single line?
You can abstract all of this logic into your higher-level application. If you can't do this, write a standard shell script and COPY it into your image.
The triple quotes look like Python syntax. You can break this up into three parts:
The cd $dir part specifies the working directory for the subprocess;
make ... is an actual command to run;
You're inspecting its output for some condition.
In Python, you can call subprocess.run() with an array of arguments and specify these various things at the application level. The array of arguments isn't reinterpreted by a shell and so protects you from this particular security issue. You might run:
import subprocess

completed = subprocess.run(
    ['make', 'image', dockerImageName],
    cwd=dir,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True)  # decode bytes to str so the substring check below works
if 'No rule to make target' in completed.stdout:
    ...
If you need to do this as a shell script, writing a proper script and quoting your arguments protects you in the same way:
#!/bin/sh
set -e
cd "$1"
if make image "$2" 2>&1 | grep -m 1 "No rule to make target"; then
exit 1
fi
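Usage might then look like this (hypothetical paths and variable names; assume the script was COPYed into the image as /usr/local/bin/build.sh and marked executable):
docker run -it image_name /usr/local/bin/build.sh "$dir" "$docker_image_name"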
You should never construct a command line by combining strings in the way you've shown. This makes you vulnerable to a shell injection attack. Especially if an attacker knows that the user has permissions to run docker commands, they can set
dir = '.; docker run --rm -v /:/host busybox cat /host/etc/shadow'
and get a file of encrypted passwords they can crack at their leisure. Pretty much anything else is possible once the attacker uses this technique to get unlimited root-level read/write access to the host filesystem.
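If you absolutely must assemble a shell string in Python, shlex.quote at least neutralizes shell metacharacters in each parameter, though the argument-array form above remains the safer choice. A sketch:
import shlex

# Each value is quoted before interpolation, so the injection above becomes inert text.
script = 'cd {} && make image {}'.format(shlex.quote(dir), shlex.quote(dockerImageName))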

csh doesn't recognize command with command line options beginning with --

I have an rsync command in my csh script like this:
#! /bin/csh -f
set source_dir = "blahDir/blahBlahDir"
set dest_dir = "foo/anotherFoo"
rsync -av --exclude=*.csv ${source_dir} ${dest_dir}
When I run this I get the following error:
rsync: No match.
If I remove the --exclude option it works. I wrote the equivalent script in bash and that works as expected:
#!/bin/bash -f
source_dir="blahDir/blahBlahDir"
dest_dir="foo/anotherFoo"
rsync -av --exclude=*.csv ${source_dir} ${dest_dir}
The problem is that this has to be done in csh only. Any ideas on how I can get his to work?
It's because csh is trying to expand --exclude=*.csv into a filename, and complaining because it cannot find a file matching that pattern.
You can get around this by enclosing the option in quotes:
rsync -av '--exclude=*.csv' ...
or escaping the asterisk:
rsync -av --exclude=\*.csv ...
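Applied to the original script, either form gives, for example:
#! /bin/csh -f
set source_dir = "blahDir/blahBlahDir"
set dest_dir = "foo/anotherFoo"
rsync -av '--exclude=*.csv' ${source_dir} ${dest_dir}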
This is a consequence of the way csh and bash differ in their default treatment of arguments with wildcards that don't match a file. csh will complain while bash will simply leave it alone.
You may think bash has chosen the better way but that's not necessarily so, as shown in the following transcript where you have a file matching the argument:
pax> touch -- '--file=xyzzy.csv' ; ls -- *.csv
--file=xyzzy.csv
pax> echo --file=*.csv
--file=xyzzy.csv
You can see there that the bash shell expands the argument rather than giving it to the program as is. Both sides have their pros and cons.
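Incidentally, if you prefer csh's stricter behaviour, bash can be told to treat a non-matching glob as an error (a sketch):
shopt -s failglob
echo --exclude=*.csv    # now an error when nothing matches, instead of passing the pattern through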

Dynamic command execution with lftp - multiple commands

I'm sure there is a simple way to do this, but I am not finding it. What I want to do is execute a series of commands using lftp, and I want to avoid repeatedly connecting to the server if possible.
Basically, I have a file with a list full of ftp directories on the server. I want to connect to the server then execute something like the following: (assume at this point that I have already converted the text file into an array of lines using cat)
for f in "${myarray}"
do
cd $f;
nlist >> $f.txt;
cd ..;
done
Of course that doesn't work, but I have to imagine there is a simple solution to what I am trying to accomplish.
I am quite inexperienced when it comes to shell scripting. Any suggestions?
First build a string that contains the list of lftp commands. Then call lftp, passing the command on its standard input. Lftp itself can redirect the output of a command to a file, with a syntax that resembles the shell.
list_commands=""
for dir in "${myarray[#]}"; do
list_commands="$list_commands
cd \"$dir\"
nlist >\"$dir.txt\"
cd .."
done
lftp <<EOF
open -u $username,$password $site
$list_commands
bye
EOF
Note that I assume that the directory names don't contain backslashes, single quotes or globbing characters. Add proper escaping if necessary.
By the way, to read lines from a file, see Why is while IFS= read used so often, instead of IFS=; while read..?. You might prefer to combine reading from the list of directories and building the commands:
list_commands=""
while IFS= read -r dir; do
list_commands="$list_commands
cd \"$dir\"
nlist >\"$dir.txt\"
cd .."
done <directory_list.txt
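You could also skip the intermediate variable and pipe the generated commands straight into lftp (a sketch, with the same caveats about special characters in the directory names):
{
  echo "open -u $username,$password $site"
  while IFS= read -r dir; do
    printf 'cd "%s"\nnlist >"%s.txt"\ncd ..\n' "$dir" "$dir"
  done <directory_list.txt
  echo "bye"
} | lftp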
