Using ZSH, how can I replace variables from a dictionary - shell

I'm making a simple install.zsh to put in my dotfiles. It's mostly used to install stuff like ruby-gems, npm, pip and so on, and then I install the rest using those package managers.
But in order to get to that level, I still need to install those package managers using the correct platform-dependent syntax. Not to mention all the stuff that is only available in the platform-dependent package manager.
99% of this is solved using a simple function like this:
install(){
    command -v brew && echo "installing $1 using Homebrew" && brew install "$1"
    command -v pkg && echo "installing $1 using pkg" && sudo pkg install "$1"
    command -v apt-get && echo "installing $1 using apt" && sudo apt-get install "$1"
}
1% of the time this won't work because brew, pkg and apt-get expect different package names. For example, pkg wants dev/ruby and dev/ruby-gems; apt wants ruby-full; and brew just wants ruby.
So I need a subfunction which replaces $1 with the platform-correct package name WITHOUT a huge switch tree made of smaller switch trees! I can already write that, but not only do I not want to write it, I don't want to maintain it every time I add a new package. I'd rather have something like a plaintext "database" consisting of rows of four fields like this:
'ruby','ruby-full','dev/ruby,dev/ruby-gems',ruby
Or something with better syntax, it's not very important. The subfunction is more important.
Of course, if I'm trying to reinvent the wheel here, if someone can point me to a wheelwright that would be even better ;)

Since 99% of this is solved using a simple function, you probably don't want to create "database" entries for the 99 out of 100 cases where they aren't needed. For the 1% of exceptions, you could e.g. create arrays named pkgs_name (with name being the generic name of the "stuff"), which contain three elements: the different package names for [1] apt, [2] pkg and [3] brew.
pkgs_ruby=(ruby-full 'dev/ruby dev/ruby-gems' ruby)
Then a function, let's call it pkgs(), passed the package manager's index and the stuff's name, could check whether the array pkgs_name exists and if so return the package name(s) for the index, otherwise just the generic name, like:
pkgs()
{
    # print the package name(s) for package-manager index $1 and generic name $2;
    # if no pkgs_<name> array is defined, fall back to the generic name
    eval echo \${pkgs_$2\[$1]-$2}
}
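For example, with pkgs_ruby defined as above and no pkgs_java array, the lookups resolve like this (illustrative only):
pkgs 1 ruby    # -> ruby-full                (apt name)
pkgs 2 ruby    # -> dev/ruby dev/ruby-gems   (pkg names)
pkgs 3 ruby    # -> ruby                     (brew name)
pkgs 1 java    # -> java                     (no pkgs_java array, so the generic name is used)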
You'd then just have to modify the install() function to take what pkgs() returns instead of just "$1":
install()
{
    whence brew && echo "installing $1 using Homebrew" && brew install $(pkgs 3 "$1")
    whence pkg && echo "installing $1 using pkg" && sudo pkg install $(pkgs 2 "$1")
    whence apt-get && echo "installing $1 using apt" && sudo apt-get install $(pkgs 1 "$1")
}
So, install ruby would take the information from $pkgs_ruby and execute e.g.
sudo apt-get install ruby-full
while install java without any definitions would execute e.g.
sudo apt-get install java

Related

Debian: "command -v <command>" still returns path after removing the package?

I've uninstalled the stylus package on my Debian system with sudo apt-get remove --purge node-stylus.
Now, when I try to run the stylus command, it says: stylus: command not found. So that part works as it should.
But in my scripts I check whether Stylus is installed or not with:
if ! command -v sudo stylus &> /dev/null; then
    echo "ERROR: Stylus is not installed!"
    exit 1
fi
And for some reason command -v stylus still returns /usr/bin/stylus thus the script won't fail.
I checked /usr/bin/ and there is no stylus there.
Could someone please explain to me why it works like this?
Bash maintains a cache of command lookups; you want to do
hash -r
to force it to forget the old value (hash -d stylus would forget just that one entry).
Separately, of course, don't use command -v sudo when you actually want command -v stylus, as already pointed out in a comment.
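After clearing the cache in the affected shell, the corrected check (without the stray sudo) could look like this, as a minimal sketch:
if ! command -v stylus > /dev/null 2>&1; then
    echo "ERROR: Stylus is not installed!"
    exit 1
fi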

How to make apt assume yes and force yes for all installations in a bash script

I'm currently getting into Linux and want to write a bash script which sets up a new machine just the way I want it to be.
In order to do that I want to install different things on it, etc.
What I'm trying to achieve is a setting at the top of the bash script which will make apt accept all [y/n] questions asked during the execution of the script.
Question example I want to automatically accept:
After this operation, 1092 kB of additional disk space will be used. Do you want to continue? [Y/n]
I just started creating the file, so here is what I have so far:
#!/bin/bash
# Constants
# Set apt to accept all [y/n] questions
>> some setting here <<
# Update and upgrade apt
apt update;
apt full-upgrade;
# Install terminator
apt install terminator
apt is meant to be used interactively. If you want to automate things, look at apt-get, and in particular its -y option:
-y, --yes, --assume-yes
    Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. If an undesirable situation, such as changing a held package, trying to install an unauthenticated package or removing an essential package occurs then apt-get will abort. Configuration Item: APT::Get::Assume-Yes.
See also man apt-get for many more options.
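Applied to the script in the question, that could look like this (a sketch; dist-upgrade is apt-get's counterpart to apt's full-upgrade):
#!/bin/bash
# Update and upgrade apt non-interactively
apt-get update
apt-get -y dist-upgrade
# Install terminator
apt-get -y install terminator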
With apt:
apt -o Apt::Get::Assume-Yes=true install <package>
See: man apt and man apt.conf
If you indeed want to set it up once at the top of the file as you say and then forget about it, you can use the APT_CONFIG environment variable. See apt.conf.
echo "APT::Get::Assume-Yes=yes" > /tmp/_tmp_apt.conf
export APT_CONFIG=/tmp/_tmp_apt.conf
apt-get update
apt-get install terminator
...
You can set apt up to assume yes permanently as follows:
printf 'APT::Get::Assume-Yes "true";\nAPT::Get::allow "true";\n' | sudo tee -a /etc/apt/apt.conf.d/90_no_prompt
Another easy way to set it at the top of your script is the command alias apt-get="apt-get --assume-yes", which causes all subsequent invocations of apt-get to include the --assume-yes argument. For example, apt-get upgrade would automatically be converted to apt-get --assume-yes upgrade by bash.
Please note that this may cause errors, because some apt-get subcommands do not accept the --assume-yes argument. For example, apt-get help would be converted to apt-get --assume-yes help, which returns an error because the help subcommand can't be used together with --assume-yes.
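Also note that non-interactive bash does not expand aliases unless expand_aliases is enabled, so inside a script the alias approach needs an extra line; a minimal sketch:
#!/bin/bash
shopt -s expand_aliases
alias apt-get="apt-get --assume-yes"

apt-get update
apt-get install terminator   # runs as: apt-get --assume-yes install terminator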

How to Test a Bash File in Terminal

I've been trying to make a bash file for newbie Linux users and I wanted to know if there is a way to test the bash file before running it.
Can I just see the result of my bash file in the terminal and not actually run it?
For example, I don't want to actually update and upgrade my system when I run this script, I just want to see the result of my bash file, whether it gives me back some error or not.
I wanted to know if there is a way to just see the result, like the output of my 'echo' commands and so on.
echo ---------------
echo hello and welcome to the automized bash file for your new linux distro!
echo ---------------
sudo apt-get update -y ; sudo apt-get upgrade -y ; sudo apt-get autoremove -y ; sudo apt-get autoclean -y ; sudo apt-get clean -y
echo ---------------
echo as you were drinking your coffee,
echo your linux distro got updated, and autocleaned as well!
Thanks in advance!
To see the results of running a bash file, a bash interpreter would have to interpret it. So the simple answer would be no.
However, if you are willing to use an online tool, you could run a bash script online. In this manner, you can see the results of running a bash script, without ever having to run it on your own machine.
A Google search turned up these, but I cannot vouch for their legitimacy:
https://www.jdoodle.com/test-bash-shell-script-online/ (for evaluating the results of a script)
https://www.shellcheck.net/ (for assessing shell code quality)
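ShellCheck can also be run locally; it only parses the script and never executes it (a sketch, assuming the script is saved as setup.sh):
sudo apt-get install shellcheck   # Debian/Ubuntu package name
shellcheck setup.sh               # static analysis only, nothing is executed
bash -n setup.sh                  # bash's own syntax check, also without executing anything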
There's no general way to run a shell script without running it. You can sometimes sort-of modify the script to make it go through the motions without actually doing anything significant, but this requires understanding the script and the commands in it.
For example, in the update script in the question, you could just add echo before each sudo apt-get command, something like this (note that I've reformatted it a bit, and added quotes around some fixed strings):
echo '---------------'
echo 'hello and welcome to the automized bash file for your new linux distro!'
echo '---------------'
echo sudo apt-get update -y
echo sudo apt-get upgrade -y
echo sudo apt-get autoremove -y
...etc...
This will simply print the commands, rather than executing them. (Note: if any commands had redirections or pipes, e.g. somecommand >outputfile or somecommand | anothercommand, adding echo doesn't remove the redirection, so you'll need to make other changes as well.)
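For example (outputfile is just an illustrative name):
echo somecommand > outputfile   # the redirection still happens: outputfile is created/truncated, and only the text "somecommand" is written into it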
If you want to actually see what the various apt-get commands would do if you ran them... you're in luck, because apt-get happens to have a --dry-run option (see the man page and this AskUbuntu question).
Note that this is a feature specific to apt-get. Very few shell commands have an option like this, so it's not like some sort of universal just-try-it-out switch. In fact, not even all apt-get subcommands support --dry-run.
Most relevantly, apt-get update doesn't support --dry-run! And it wouldn't be useful if it did. If you don't start by updating the package indexes -- actually updating them, not just pretending to -- then the other apt-get commands won't be able to tell what's new, and won't actually tell you what needs to be changed.
If you don't actually-for-real update the indexes, then you can't tell what the rest of the script would do if it ran for real. So you could do something like this:
...
sudo apt-get update -y
sudo apt-get upgrade --dry-run --assume-no
sudo apt-get autoremove --dry-run --assume-no
...etc...
...but be aware that the script is actually executing; while some of its effects have been disabled, others (notably the apt-get update) haven't.

Makefile while loop try/catch equivalent to install python dependencies first with conda then with pip

I need to run a while loop to install Python dependencies. In the Python world, two ways to install dependencies have become established:
using conda (for some people this is the "robust/stable/desired way", provided by a "Python distribution" called Anaconda/Miniconda),
using pip (in recent years included as the official way of Python itself).
The "pseudocode" should be:
try to install the dependency with the conda command
if it fails then install it with the pip command
In the Python world dependencies are specified in a requirements.txt file, usually pinned to exact versions (==), one dependency per line with the pattern <MY_DEPENDENCY>==<MY_VERSION>.
The equivalent desired bash command is: while read requirement; do conda install --yes $requirement || pip install $requirement; done < requirements.txt, however this does not work in the GNU make/Makefile world for reasons I don't completely understand.
I've tried a few different flavors of that while loop, all unsuccessful. Basically, once the conda command fails I am not able to go on with the pip attempt. I am not sure why this happens (it works in "normal bash") and I cannot find a way to manage some sort of low-level try/catch pattern (for those familiar with high-level programming languages).
This is my last attempt which is not working because it stops when conda fails:
foo-target:
	# equivalent to bash: conda install --yes $requirement || pip install $requirement;
	while read requirement; do \
		conda install --yes $requirement ; \
		[ $$? != 0 ] || pip install $requirement; \
	done < requirements.txt
How do I make sure I try to install each requirement inside requirements.txt first with conda, when conda fails then with pip?
Why is my code not working? I see people pointing to the differences between sh and bash, but I am not able to isolate the issue.
Edit:
I ended up working around it by using the bash command inside the Makefile, but I find this solution not ideal, because I need to maintain yet another chunk of code in a one-line bash script (see below). Is there a way to keep all the stuff inside the Makefile, avoiding bash entirely?
The Makefile target:
foo-target:
	bash install-python-dependencies.sh
The bash one line script:
#!/usr/bin/env bash
while read requirement; do conda install --yes $requirement || pip install $requirement; done < requirements.txt
I can run the script directly from the command line (bash), and I can also run it from within the Makefile, but I would like to get rid of the bash script and always execute make foo-target without using bash (avoiding bash even inside the Makefile).
As shown above, your makefile will work as you expect, other than that you have to escape the $ in shell variables like $$requirement.
I couldn't reproduce your problem; here is a simplified example that emulates the behavior:
foo-target:
	for i in 1 2 3; do \
		echo conda; \
		test $$i -ne 2; \
		[ $$? -eq 0 ] || echo pip; \
	done
gives the expected output:
$ make
conda
conda
pip
conda
Have you added the .POSIX: target to your makefile without showing it here? If I do that, then I get the behavior you describe:
conda
make: *** [Makefile:2: foo-target] Error 1
The reason for this is described in the manual for .POSIX:
In particular, if this target is mentioned then recipes will be invoked as if the shell had been passed the '-e' flag: the first failing command in a recipe will cause the recipe to fail immediately.
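The effect of -e can be seen directly in a shell, independent of make (a small illustration):
sh -c 'false; echo reached'      # prints "reached": a failing command does not stop the list
sh -e -c 'false; echo reached'   # prints nothing and exits non-zero: -e aborts at the first failure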
If you want to keep .POSIX mode but not get this error the simplest way is to use the method you show in your first example; I don't know why you stopped using it:
foo-target:
	while read requirement; do \
		conda install --yes $$requirement || pip install $$requirement; \
	done < requirements.txt

function to detect cli installation

I have inline checks to detect whether CLI packages are installed, to avoid reinstalling existing packages, but I find them tedious and not very readable once the list gets long.
For example:
which -s redis-cli || brew install redis
which -s java || brew cask install java
which -s yarn || npm install -g yarn
Is there a way to wrap this in a function to make it nicer looking? For example (pseudocode):
function npmInstall(name) {
if (which -s name) {
return;
}
npm install -g name;
}
Thanks a lot!
You can pass the CLI packages as parameters.
Example, script.sh:
#!/bin/bash
for cli in "$@"; do
    which "$cli" || npm install -g "$cli"
done
invoked with ./script.sh java yarn
Update:
As package names may differ from executable names, you can handle these differences using a Bash associative array. The package name passed as a parameter to the script is used only if no value is found in the array for that package:
#!/bin/bash
# executable names that differ from their package names
declare -A exe=([redis]="redis-cli" [otherpkg]="otherpkg-cli")

for pkg in "$@"; do
    # look for the executable (mapped name if present), install the package under its own name
    executable=${exe[$pkg]:-$pkg}
    which "$executable" || npm install -g "$pkg"
done
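If, as in the original examples, different packages need different installers (brew, brew cask, npm), another option is a tiny helper that takes the executable to check for plus the full install command (a sketch, not part of the answer above; the name need is made up):
need() {
    local cmd=$1; shift
    which -s "$cmd" || "$@"
}

need redis-cli brew install redis
need java brew cask install java
need yarn npm install -g yarn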
