I use inline checks to detect whether CLI packages are already installed, to avoid reinstalling existing packages, but I find this tedious and hard to read for long lists.
For example:
which -s redis-cli || brew install redis
which -s java || brew cask install java
which -s yarn || npm install -g yarn
Is there a function that would make this look nicer? For example (pseudocode):
npmInstall() {
    if which -s "$1"; then
        return
    fi
    npm install -g "$1"
}
Thanks a lot!
You may pass the CLI package names as parameters.
Example, script.sh:
for cli in "$@"; do
    which "$cli" || npm install -g "$cli"
done
invoked with ./script.sh java yarn
Update:
As package names may differ from executable names, you can handle these differences using a Bash associative array. The package name passed as a parameter to the script will be used only if no value is found in the array for that package:
declare -A exe=([redis]="redis-cli" [otherpkg]="otherpkg-cli")
for pkg in "$@"; do
    package=${exe[$pkg]:-$pkg}
    which "$package" || npm install -g "$package"
done
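The `${exe[$pkg]:-$pkg}` expansion falls back to the key itself when the array has no entry for it. A minimal bash sketch of that idiom (the `yarn` key is hypothetical and deliberately absent from the array):

```shell
#!/usr/bin/env bash
declare -A exe=([redis]="redis-cli")

echo "${exe[redis]:-redis}"   # mapped entry: prints redis-cli
echo "${exe[yarn]:-yarn}"     # no entry: falls back to yarn
```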
So I've just created my very first docker image (woohoo) and was able to run it on the original host system where it was created (Ubuntu 20.04 Desktop PC). The image was executed using docker run -it <image_id>. The expected command (defined in CMD which is just a bash script) was run, and the expected output was seen. I assumed this meant I successfully created my very first docker image and so I pushed this to Docker Hub.
Docker Hub
GitHub repo with original docker-compose.yml and Dockerfile
Here's the Dockerfile:
FROM ubuntu:20.04
# Required for Debian interaction
# (https://stackoverflow.com/questions/62299928/r-installation-in-docker-gets-stuck-in-geographic-area)
ENV DEBIAN_FRONTEND noninteractive
WORKDIR /home/benchmarking-programming-languages
# Install pre-requisites
# Versions at time of writing:
# gcc -- version (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
# make -- GNU Make 4.2.1
# curl -- 7.68.0
RUN apt update && apt install make build-essential curl wget tar -y
# Install `column`
RUN wget https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/v2.35/util-linux-2.35-rc1.tar.gz
RUN tar xfz util-linux-2.35-rc1.tar.gz
WORKDIR /home/benchmarking-programming-languages/util-linux-2.35-rc1
RUN ./configure
RUN make column
RUN cp .libs/column /bin/
WORKDIR /home/benchmarking-programming-languages
RUN rm -rf util-linux-2.35-rc1*
RUN apt install python3 pip -y
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN apt install default-jdk-headless -y
RUN apt install rustc -y
# Install GoLang
RUN wget https://go.dev/dl/go1.17.8.linux-amd64.tar.gz
RUN rm -rf /usr/local/go && tar -C /usr/local -xzf go1.17.8.linux-amd64.tar.gz
ENV PATH="/usr/local/go/bin:${PATH}"
# Install Haxe and Haxelib
RUN wget https://github.com/HaxeFoundation/haxe/releases/download/4.2.5/haxe-4.2.5-linux64.tar.gz
RUN tar xfz haxe-4.2.5-linux64.tar.gz
RUN ln -s /home/benchmarking-programming-languages/haxe_20220306074705_e5eec31/haxe /usr/bin/haxe
RUN ln -s /home/benchmarking-programming-languages/haxe_20220306074705_e5eec31/haxelib /usr/bin/haxelib
# # Install Neko (Haxe VM)
# RUN add-apt-repository ppa:haxe/snapshots -y
# RUN apt update
# RUN apt install neko -y
RUN if ! test -d /home/benchmarking-programming-languages; then mkdir /home/benchmarking-programming-languages && echo "Created directory /home/benchmarking-programming-languages."; fi
COPY . /home/benchmarking-programming-languages
RUN pip install -r /home/benchmarking-programming-languages/requirements_dev.txt
CMD [ "/home/benchmarking-programming-languages/benchmark.sh -v" ]
However, upon pulling the same image on my Windows 10 machine (the same machine as above, just dual booted) and on a Windows 11 laptop, using both the Docker Desktop application and the command line (docker pull mariosyian/benchmarking-programming-languages followed by docker run -it <image_id>), both give me the following error:
Error invoking remote method 'docker-run-container': Error: (HTTP code 400) unexpected - failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/home/benchmarking-programming-languages/benchmark.sh -v": stat /home/benchmarking-programming-languages/benchmark.sh -v: no such file or directory: unknown
Despite this, when running the image as a container with a shell (docker run -it <image_id> sh), I am able not only to see the file, but to execute it with no errors! Can someone suggest a reason why the error happens in the first place, and how to fix it?
In your Dockerfile you have specified the CMD as
CMD [ "/home/benchmarking-programming-languages/benchmark.sh -v" ]
This uses the JSON syntax of the CMD instruction, i.e. is an array of strings where the first string is the executable and each following string is a parameter to that executable.
Since you only have a single string specified, docker tries to invoke the executable /home/benchmarking-programming-languages/benchmark.sh -v - i.e. a file named "benchmark.sh -v", containing a space in its name and ending with -v. But what you actually intended was to invoke the benchmark.sh script with the -v parameter.
You can do this by correctly specifying the parameter(s) as separate strings:
CMD ["/home/benchmarking-programming-languages/benchmark.sh", "-v"]
or by using the shell syntax:
CMD /home/benchmarking-programming-languages/benchmark.sh -v
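The failure mode is easy to reproduce outside Docker: the exec form does no word splitting, so the whole string is looked up as a single path. A sketch of the lookup the runtime attempts (the path is the one from the question; it does not exist on the machine running this snippet):

```shell
#!/bin/sh
# With the single-string CMD, argv[0] is the whole string, space included:
cmd='/home/benchmarking-programming-languages/benchmark.sh -v'

# The runtime stats a file literally named "benchmark.sh -v" and fails:
if [ ! -e "$cmd" ]; then
    echo "stat $cmd: no such file or directory"
fi
```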
Take the example of pip.
We can do:
1) Assume the command is there and run pip install somepackage; fail if it exits with a non-zero status
pip install somepackage || exit 1
2) Attempt to install pip
wget <path online to pip>; pip install somepackage
3) Check pip exists
pip --version || wget <path online to pip> && pip install somepackage
Is there a better way than these to check for the command's existence with the least resource usage?
Your Python script doesn't have code like
try:
    import requests
except ImportError:
    import subprocess
    subprocess.call(["pip", "install", "requests"])
Instead, you have an installer that ensures that requests has been installed
before you run your script.
The same logic applies to your shell script. It isn't your script's job to install pip if it's missing; whoever runs the script should ensure that pip is installed before you run the script. If you do anything, it should simply be to note that pip wasn't found.
if ! command -v pip > /dev/null; then
    printf 'pip not found; check your PATH or install pip before continuing\n' >&2
    exit 1
fi
pip install somepackage
if ! type pip; then
    wget ...
    pip install whatever
fi
type is a shell builtin that returns true if the command can be found and false otherwise.
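A quick way to see the exit statuses involved (output suppressed so only the status matters; nosuchcmd123 is a deliberately nonexistent command name):

```shell
#!/bin/sh
type ls > /dev/null 2>&1          # exit status 0: command found
echo "ls found: $?"

type nosuchcmd123 > /dev/null 2>&1   # non-zero exit status: not found
echo "missing: $?"
```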
I need to run a while loop to install Python dependencies. In the Python world, two ways of installing dependencies have recently become established:
using conda (for some people this is the "robust/stable/desired way", provided by a "Python distribution" called Anaconda/Miniconda),
using pip (in the last few years included as the official way of Python itself).
The "pseudocode" should be:
try to install the dependency with the conda command
if it fails then install it with the pip command
In the Python world dependencies are specified in a requirements.txt file, usually exact versions (==) as one dependency per line with the pattern <MY_DEPENDENCY>==<MY_VERSION>.
The desired command, in bash, is: while read requirement; do conda install --yes $requirement || pip install $requirement; done < requirements.txt; however, this does not work in the GNU make/Makefile world, for reasons that I don't completely get.
I've tried a few different flavors of that while loop - all unsuccessful. Basically, once the conda command fails, I am not able to go on with the pip attempt. I am not sure why this happens (as it works in "normal bash") and I cannot find a way to implement some sort of low-level try/catch pattern (for those familiar with high-level programming languages).
This is my last attempt which is not working because it stops when conda fails:
foo-target:
	# equivalent to bash: conda install --yes $requirement || pip install $requirement;
	while read requirement; do \
		conda install --yes $requirement ; \
		[ $$? = 0 ] || pip install $requirement; \
	done < requirements.txt
How do I make sure I try to install each requirement inside requirements.txt first with conda, when conda fails then with pip?
Why is my code not working? I see people pointing to the differences between sh and bash, but I am not able to isolate the issue.
Edit:
I ended up working around it by using the bash command inside the Makefile, but I find this solution less than ideal, because I need to maintain yet another chunk of code in a one-line bash script (see below). Is there a way to keep everything inside the Makefile, avoiding bash entirely?
The Makefile target:
foo-target:
bash install-python-dependencies.sh
The bash one line script:
#!/usr/bin/env bash
while read requirement; do conda install --yes $requirement || pip install $requirement; done < requirements.txt
I can run the script directly from the command line (bash), and I can also run it from within the Makefile, but I would like to get rid of the bash script and always execute make foo-target without using bash (avoiding bash even inside the Makefile).
Your makefile will work as you expect, apart from the fact that you have to escape the $ in shell variables, writing $$requirement.
I couldn't reproduce your problem with a simplified example to emulate the behavior:
foo-target:
	for i in 1 2 3; do \
		echo conda; \
		test $$i -ne 2; \
		[ $$? -eq 0 ] || echo pip; \
	done
gives the expected output:
$ make
conda
conda
pip
conda
Have you added the .POSIX: target to your makefile, which you don't show here? If I do that, then I get the behavior you claim to see:
conda
make: *** [Makefile:2: foo-target] Error 1
The reason for this is described in the manual for .POSIX:
In particular, if this target is mentioned then recipes will be invoked as if the shell had been passed the '-e' flag: the first failing command in a recipe will cause the recipe to fail immediately.
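The effect of -e can be reproduced without make at all; in this sketch the unguarded failure aborts the inner shell before its echo, while guarding the failure with || does not (the || true on the first line only keeps the outer script going after the inner shell exits non-zero):

```shell
#!/bin/sh
sh -e -c 'false; echo reached' || true   # inner shell stops at false; prints nothing
sh -e -c 'false || echo fallback'        # prints "fallback": || exempts false from -e
```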
If you want to keep .POSIX mode but not get this error the simplest way is to use the method you show in your first example; I don't know why you stopped using it:
foo-target:
	while read requirement; do \
		conda install --yes $$requirement || pip install $$requirement; \
	done < requirements.txt
I have a Dockerfile like below :
FROM node:latest
RUN npm install something && \
npm install something && \
npm install something
I want to pass a 'yes' response for every required 'yes/no' prompt when npm installs.
Is there any way to do this?
I used the following to install Angular without usage statistics sharing.
RUN echo n | npm install -g --silent @angular/cli
I think echo y should work for you
There's the yes command specifically for this in linux:
RUN yes | npm install something && \
    npm install something && \
    yes yes | npm install something
The first line pipes an endless stream of "y" answers into the first npm install command. The yes command also takes an argument specifying what it should output, so if you need to answer "yes" instead of a single "y" per line, you can run yes yes, as in the third example.
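The behaviour is easy to check in isolation; head just truncates yes's otherwise infinite stream:

```shell
#!/bin/sh
yes | head -3       # prints three lines of "y"
yes yes | head -2   # prints two lines of "yes"
```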
I'm making a simple install.zsh to put in my dotfiles. It's mostly used to install stuff like ruby-gems, npm, pip and so on, and then I install the rest using those package managers.
But in order to get to that level, I still need to install those package managers using the correct platform-dependent syntax. Not to mention all the stuff that is only available in the platform-dependent package manager.
99% of this is solved using a simple function like this:
install() {
    command -v brew && echo "installing $1 using Homebrew" && brew install "$1"
    command -v pkg && echo "installing $1 using pkg" && sudo pkg install "$1"
    command -v apt-get && echo "installing $1 using apt" && sudo apt-get install "$1"
}
1% of the time this won't work because brew, pkg, and apt-get expect different package names. For example, pkg wants dev/ruby and dev/ruby-gems; apt wants ruby-full; and brew just wants ruby.
So I need a subfunction which replaces $1 with the platform-correct package name WITHOUT a huge switch tree consisting of smaller switch trees! I can already do that, and not only do I not want to write it, but I don't want to maintain it when I add new packages... I'd rather have something like a plaintext "database" consisting of rows of four fields like this:
'ruby','ruby-full','dev/ruby,dev/ruby-gems','ruby'
Or something with better syntax, it's not very important. The subfunction is more important.
Of course, if I'm trying to reinvent the wheel here, if someone can point me to a wheelwright that would be even better ;)
Since 99% of this is solved by a simple function, you probably don't want to create "database" entries for the 99 out of 100 cases where they aren't needed. For the 1% of exceptions, you could e.g. create arrays named pkgs_name (with name being the generic name of the "stuff"), each containing three elements: the different package names for [1] apt, [2] pkg and [3] brew.
pkgs_ruby=(ruby-full 'dev/ruby dev/ruby-gems' ruby)
Then a function, let's call it pkgs(), passed the package manager's index and the stuff's name, could check whether the array pkgs_name exists and, if so, return the package name(s) at that index, otherwise just the generic name:
pkgs()
{
    eval echo \${pkgs_$2\[$1]-$2}
}
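Since the answer above targets zsh (arrays are 1-indexed, and the backslash before [ suppresses globbing), here is a rough bash translation of the same lookup for illustration; bash arrays are 0-indexed, so the manager indices shift down by one, and pkgs_java is deliberately left undefined to show the fallback:

```shell
#!/usr/bin/env bash
pkgs_ruby=(ruby-full 'dev/ruby dev/ruby-gems' ruby)

pkgs() {
    # $1 = manager index (0 = apt, 1 = pkg, 2 = brew), $2 = generic name.
    # Falls back to the generic name when pkgs_$2 has no element at that index.
    eval echo "\"\${pkgs_$2[$1]-$2}\""
}

pkgs 0 ruby   # prints ruby-full
pkgs 1 ruby   # prints dev/ruby dev/ruby-gems
pkgs 0 java   # prints java (no pkgs_java array defined)
```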
You'd then just have to modify the install() function to take what pkgs() returns instead of just "$1":
install()
{
    whence brew && echo "installing $1 using Homebrew" && brew install $(pkgs 3 $1)
    whence pkg && echo "installing $1 using pkg" && sudo pkg install $(pkgs 2 $1)
    whence apt-get && echo "installing $1 using apt" && sudo apt-get install $(pkgs 1 $1)
}
So, install ruby would take the information from $pkgs_ruby and execute, e.g.,
sudo apt-get install ruby-full
while install java, without any definitions, would execute, e.g.,
sudo apt-get install java