bash script unexpected end of file pterodactyl - bash

I am trying to run a custom egg through the Pterodactyl panel; however, I get the error "/entrypoint.sh: line 30: syntax error: unexpected end of file".
My Dockerfile is as follows:
FROM ubuntu:18.04
MAINTAINER Amelia, <me#amelia.fun>
RUN apt-get update -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y dos2unix curl gnupg2 git-core zlib1g-dev libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev libffi-dev yarn build-essential gpg-agent zip unzip software-properties-common git default-jre python3-pip python-minimal python-pip ffmpeg libopus-dev libsodium-dev libpython2.7 libpython2.7-dev wget php7.2 php7.2-common php7.2-cli php7.2-fpm
RUN curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
RUN bash nodesource_setup.sh
RUN apt-get install -y nodejs
RUN rm -rf nodesource_setup.sh
RUN adduser -D -h /home/container container
USER container
ENV USER=container HOME=/home/container
WORKDIR /home/container
COPY ./entrypoint.sh /entrypoint.sh
CMD ["/bin/bash", "/entrypoint.sh"]
and my entrypoint.sh is as follows:
#!/bin/bash
cd /home/container
MODIFIED_STARTUP=`eval echo $(echo ${STARTUP_PARAMETERS} | sed -e 's/{{/${/g' -e 's/}}/}/g')`
rm -rf *
git clone ${REPO_PARAMETERS}
cd */
if grep -q 'Java' AppType
then
${STARTUP_PARAMETERS}
if grep -q 'PHP' AppType
then
${STARTUP_PARAMETERS}
elif grep -q 'Python2' AppType
then
[ -f "requirements.txt" ] && pip2 install -r requirements.txt ${STARTUP_PARAMETERS} || ${STARTUP_PARAMETERS}
elif grep -q 'Python3' AppType
then
[ -f "requirements.txt" ] && pip3 install -r requirements.txt ${STARTUP_PARAMETERS} || ${STARTUP_PARAMETERS}
elif grep -q 'NodeJS' AppType
then
npm install
${STARTUP_PARAMETERS}
else
echo "Application not supported"
fi
echo "${MODIFIED_STARTUP}"
The Bash file is nowhere near 30 lines long, so I'm not really sure what's wrong.
The guide I used can also be found here

The immediate problem is that you have two if statements, but only one of them is closed with fi; it looks to me like the second one should be elif. But there are a number of other things that look like bad ideas to me:
cd commands in scripts should (almost) always have error tests -- for example, if cd /home/container fails for some reason, the rest of the script (including rm -rf *) will run in an unexpected location. Now, a self-destroying Docker environment may not be as big a deal as a self-destroying real system, but it's still not a good thing. I'd use something like this instead:
cd /home/container || {
echo "Error -- can't move to /home/container, something rotten in Denmark." >&2
exit 1
}
A similar comment applies to cd */.
The next line, which sets MODIFIED_STARTUP, is a mishmash of bad ideas. I'm not familiar with what's going to be in $STARTUP_PARAMETERS, but in general: use $( ) instead of backticks (and not a weird mix of both). echo $(somecommand) is pretty much a no-op; just run the command directly. Also, variable references (and similar expansions like $( )) should almost always be in double quotes (exception: on the right side of an assignment). And eval is generally dangerous and should be avoided if possible. If you give me an example of what $STARTUP_PARAMETERS looks like, I could probably give a cleaned-up version of this.
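For illustration only, here is a sketch of the same substitution using $( ), quoting, and no redundant echo; since I don't know the real format of $STARTUP_PARAMETERS, this is a guess at the intent:
# Rewrite {{VAR}} placeholders into ${VAR} references (guessing at what the original line intends)
template=$(printf '%s' "${STARTUP_PARAMETERS}" | sed -e 's/{{/${/g' -e 's/}}/}/g')
# Expanding those references still needs eval, which is exactly why this pattern is worth rethinking
MODIFIED_STARTUP=$(eval "printf '%s' \"${template}\"")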
The big if ... elif... etc has several conditions that do the same thing, e.g.
elif grep -q 'Python2' AppType
then
[ -f "requirements.txt" ] && pip2 install -r requirements.txt ${STARTUP_PARAMETERS} || ${STARTUP_PARAMETERS}
elif grep -q 'Python3' AppType
then
[ -f "requirements.txt" ] && pip3 install -r requirements.txt ${STARTUP_PARAMETERS} || ${STARTUP_PARAMETERS}
On the DRY principle (Don't Repeat Yourself), it'd be better to have a single test for all equivalent situations, like this:
elif grep -q 'Python2' AppType || grep -q 'Python3' AppType
then
[ -f "requirements.txt" ] && pip2 install -r requirements.txt ${STARTUP_PARAMETERS} || ${STARTUP_PARAMETERS}
or even:
elif grep -q 'Python[23]' AppType
then
[ -f "requirements.txt" ] && pip2 install -r requirements.txt ${STARTUP_PARAMETERS} || ${STARTUP_PARAMETERS}
BTW, the use of ${STARTUP_PARAMETERS} without quotes is setting off warning bells for me here, but may be inevitable -- again, I don't know its format. And the && ... || construction isn't always a safe replacement for if then else fi, since it can run both branches. In this script, if requirements.txt exists but the pip2 install command fails, it'll go ahead and run ${STARTUP_PARAMETERS} as well. Is that intentional? If not, I'd use a proper if statement instead.
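If running ${STARTUP_PARAMETERS} after a failed pip install is not the intent, a proper if reads something like this; it's only a sketch, since I don't know whether pip is really supposed to receive ${STARTUP_PARAMETERS} as extra arguments:
elif grep -q 'Python[23]' AppType
then
    # Install dependencies first; then run the startup command regardless of how pip fared
    if [ -f requirements.txt ]; then
        pip3 install -r requirements.txt   # or pip2, chosen from AppType if both must stay supported
    fi
    ${STARTUP_PARAMETERS}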

Related

When running my bash script for setting up ssh tunneling, it stops halfway

The following is my bash script for setting up ssh tunneling. However, it always stops when it gets to the echo part. Does anyone know why? My distro is Ubuntu 20.
apt update && apt install -y wget && DEBIAN_FRONTEND=noninteractive apt-get install
openssh-server -y &&
mkdir -p ~/.ssh && cd $_ &&
echo "ssh-ed25519
AAAAC3NzaC1lZDI1NTE5AAAAII2AOiMJXSWr/yYuAkSur/QSfdwBbmK3hs4qzlMvOQxT dmml#Dmms-MBP"
>> authorized_keys
&& service ssh start
thanks.
My response would be better placed in a comment, but I can't get the formatting right, so I'll post it here. The problem is likely due to a formatting issue. Splitting the string that's passed to the echo command over multiple lines is especially problematic. Try re-formatting as shown below, noting the backslash (\) at the end of each line. There's likely a better way to accomplish the goal than stringing a large number of commands together. Also, resist the temptation to use "set -e" here. See comments for additional details.
apt update && \
apt install -y wget && \
DEBIAN_FRONTEND=noninteractive apt-get install openssh-server -y && \
mkdir -p ~/.ssh && \
cd $_ && \
echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII2AOiMJXSWr/yYuAkSur/QSfdwBbmK3hs4qzlMvOQxT dmml#Dmms-MBP" >> authorized_keys && \
service ssh start
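If the goal grows beyond a one-off, a small script with explicit error handling may be easier to maintain than one long chain; this is only a sketch (deliberately without set -e, per the note above), and the public key value is a placeholder:
#!/bin/bash
apt update || exit 1
DEBIAN_FRONTEND=noninteractive apt-get install -y wget openssh-server || exit 1
mkdir -p ~/.ssh || exit 1
# Paste the real public key here, kept on a single line
echo "ssh-ed25519 AAAA...your-public-key... you@your-machine" >> ~/.ssh/authorized_keys
service ssh start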

Docker Build Image -- can't cd into directory and run commands

Docker Version: 17.09.1-ce
I am a beginner with Docker and I am trying to build a Docker image on CentOS. Below is a snippet of the Dockerfile I have:
FROM centos
RUN yum -y install samba-common && \
yum -y install gcc perl mingw-binutils-generic mingw-filesystem-base mingw32-binutils mingw32-cpp mingw32-crt mingw32-filesystem mingw32-gcc mingw32-headers mingw64-binutils mingw64-cpp mingw64-crt mingw64-filesystem mingw64-gcc mingw64-headers libcom_err-devel popt-devel zlib-devel zlib-static glibc-devel glibc-static python-devel && \
yum -y install git gnutls-devel libacl1-dev libacl-devel libldap2-dev openldap-devel && \
yum -y remove libbsd-devel && \
WORKDIR /usr/src && \
git clone git://xxxxxxxx/p/winexe/winexe-waf winexe-winexe-wafgit && \
WORKDIR /usr/src/samba && \
WORKDIR /usr/src/winexe-winexe-wafgit/source && \
head -n -3 wscript_build > tmp.txt && cp -f tmp.txt wscript_build && \
echo -e '\t'"stlib='smb_static bsd z resolv rt'", >> wscript_build && \
echo -e '\t'"lib='dl gnutls'", >> wscript_build && \
echo -e '\t'")" >> wscript_build && \
rm -rf tmp.txt && \
./waf --samba-dir=../../samba configure build
I tried with a normal cd, which did not work. WORKDIR did not work either. How can I set the working directory in a Dockerfile?
I am getting an error like below using the above Dockerfile
/bin/sh: WORKDIR: command not found
The command '/bin/sh -c yum -y install samba-common && yum -y install gcc perl mingw-binutils-generic mingw-filesystem-base mingw32-binutils mingw32-cpp mingw32-crt mingw32-filesystem mingw32-gcc mingw32-headers mingw64-binutils mingw64-cpp mingw64-crt mingw64-filesystem mingw64-gcc mingw64-headers libcom_err-devel popt-devel zlib-devel zlib-static glibc-devel glibc-static python-devel && yum -y install git gnutls-devel libacl1-dev libacl-devel libldap2-dev openldap-devel && yum -y remove libbsd-devel && WORKDIR /usr/src && git clone git://xxxxxxxx/p/winexe/winexe-waf winexe-winexe-wafgit && WORKDIR /usr/src/samba && git reset --hard a6bda1f2bc85779feb9680bc74821da5ccd401c5 && WORKDIR /usr/src/winexe-winexe-wafgit/source && head -n -3 wscript_build > tmp.txt && cp -f tmp.txt wscript_build && echo -e '\t'"stlib='smb_static bsd z resolv rt'", >> wscript_build && echo -e '\t'"lib='dl gnutls'", >> wscript_build && echo -e '\t'")" >> wscript_build && rm -rf tmp.txt && ./waf --samba-dir=../../samba configure build' returned a non-zero code: 127
When I tried a normal cd instead of WORKDIR, I got the error below:
/bin/sh: line 0: cd: /usr/src/samba: No such file or directory. But with sudo I can go into it. Then I tried to put sudo cd <directory> in the Dockerfile, and it said no sudo was found.
UPDATE 1:
This is how I started the build:
sudo docker build -t abwinexeimage -f ./abwinexeimage .
The build completed successfully, but unfortunately when I list the images I don't see any image with the tag name abwinexeimage.
I don't understand that first entry with the tag name <none>. What does it represent? It shows a size of 1.23 GB. Do I really need this image, or can I safely delete it?
When I started the build, the first line showed Sending build context to Docker daemon 303.9MB. Does that mean that, in the image list, the repository named centos with the tag latest is the image I built? I'm assuming so, as its size says 202 MB.
Then I issued docker ps, but no container was running, so I issued docker ps -a to see the stopped containers.
Then I tried to run the image as a container.
Then I issued docker ps again to check whether the container was running.
Now I can tell you why I am so concerned about the multiple containers present. I actually wanted to manually cd into /usr/src/samba inside the Docker container to verify whether the changes made via the Dockerfile were applied correctly. Since I have multiple containers, I'm really not sure which container I need to look into. In that stunt, I tried to start all the containers and then manually issue
docker exec -it CONTAINER_NAME [bash | sh] to verify whether I could find that file system there. This is the reason why I asked whether I can have a single container, so that I can easily find the file system. My understanding is that since multiple RUN statements create different layers, it's difficult for me to find in which container my file system resides so that I can cd into it. Sorry for the big explanation; I am trying to understand the concepts better. Your comments please.
You need to use WORKDIR as a Dockerfile instruction, instead of using it inside a RUN instruction.
RUN has 2 forms:
RUN <command> (shell form; the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows)
RUN ["executable", "param1", "param2"] (exec form)
WORKDIR
WORKDIR /path/to/workdir
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
FROM centos
RUN yum -y install samba-common && \
yum -y install gcc perl mingw-binutils-generic mingw-filesystem-base mingw32-binutils mingw32-cpp mingw32-crt mingw32-filesystem mingw32-gcc mingw32-headers mingw64-binutils mingw64-cpp mingw64-crt mingw64-filesystem mingw64-gcc mingw64-headers libcom_err-devel popt-devel zlib-devel zlib-static glibc-devel glibc-static python-devel && \
yum -y install git gnutls-devel libacl1-dev libacl-devel libldap2-dev openldap-devel && \
yum -y remove libbsd-devel
WORKDIR /usr/src
# Use git clone with RUN, not with WORKDIR
RUN git clone git://xxxxxxxx/p/winexe/winexe-waf winexe-winexe-wafgit
# So start it as a new instruction
WORKDIR /usr/src/samba
WORKDIR /usr/src/winexe-winexe-wafgit/source
# Start RUN on a new line
RUN head -n -3 wscript_build > tmp.txt && cp -f tmp.txt wscript_build && \
echo -e '\t'"stlib='smb_static bsd z resolv rt'", >> wscript_build && \
echo -e '\t'"lib='dl gnutls'", >> wscript_build && \
echo -e '\t'")" >> wscript_build && \
rm -rf tmp.txt && \
./waf --samba-dir=../../samba configure build
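To rebuild and check the tag afterwards, something along these lines should do (the image name and -f argument are taken from your build command; docker image prune is optional and only removes the dangling <none>:<none> images left over from earlier builds):
sudo docker build -t abwinexeimage -f ./abwinexeimage .
sudo docker images abwinexeimage   # should list REPOSITORY abwinexeimage with TAG latest
sudo docker image prune            # optional cleanup of dangling <none> images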

What is the proper way to script a new nginx instance with SSL on a new Ubuntu 16.04 server?

I have this so far but I'm missing a couple of things like getting the cron job scripted. Don't want to do this as root. So I'm assuming some more could be done to set up the first user at the same time. The script would need to be idempotent (can be run over and over again without risking changing anything if it was run with the same arguments before).
singledomaincertnginx.sh:
#!/bin/bash
if [ -z "$3" ]; then
echo use is "singledomaincertnginx.sh <server-ssh-address> <ssl-admin-email> <ssl-domain>"
echo example: "singledomaincertnginx.sh user#mydomain.com admin#mydomain.com some-sub-domain.mydomain.com"
exit
fi
ssh $1 "cat > ~/wks" << 'EOF'
#!/bin/bash
echo email: $1
echo domain: $2
sudo add-apt-repository -y ppa:certbot/certbot
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y software-properties-common
sudo apt-get install -y python-certbot-nginx
sudo apt-get install -y nginx
sudo sed -i "s/server_name .*;/server_name $2;/" /etc/nginx/sites-available/default
sudo systemctl restart nginx.service
if [[ -e /etc/letsencrypt/live/$2/fullchain.pem ]]; then
sudo certbot -n --nginx --agree-tos -m "$1" -d "$2"
fi
if [[ ! sudo crontab -l | grep certbot ]]; then
# todo: add cron job to renew: 15 3 * * * /usr/bin/certbot renew --quiet
EOF
ssh $1 "chmod +x ~/wks"
ssh -t $1 "bash -x -e ~/wks $2 $3"
I have this so far but I'm missing a couple of things like getting the cron job scripted.
Here's one way to complete (and correct) what you started:
if ! sudo crontab -l | grep certbot; then
echo "15 3 * * * /usr/bin/certbot renew --quiet" | sudo tee -a /var/spool/cron/crontabs/root >/dev/null
fi
Here's another way I prefer because it doesn't need to know the path of the crontabs:
if ! sudo crontab -l | grep certbot; then
sudo crontab -l | { cat; echo "15 3 * * * /usr/bin/certbot renew --quiet"; } | sudo crontab -
fi
Something I see missing is how the certificate file /etc/letsencrypt/live/$domain/fullchain.pem gets created.
Do you provide that by other means,
or do you need help with that part?
Don't want to do this as root.
Most of the steps involve running apt-get,
and for that you already require root.
Perhaps you meant that you don't want to do the renewals using root.
Some services operate as a dedicated user instead of root,
but looking through the documentation of certbot I haven't seen anything like that.
So it seems a common practice to do the renewals with root,
so adding the renewal command to root's crontab seems fine to me.
I would improve a couple of things in the script to make it more robust:
The positional parameters $1, $2 and so on scattered around are easy to lose track of, which could lead to errors. I would give them proper names.
The command line argument validation if [ -z "$3" ] is weak, I would make that more strict as if [ $# != 3 ].
Once the remote script is generated, you call it with bash -e, which is good for safeguarding. But if the script is called by something else without -e, the safeguard won't be there. It would be better to build that safeguard into the script itself with set -e. I would go further and use set -euo pipefail which is even more strict. And I would put that in the outer script too.
Most of the commands in the remote script require sudo. For one thing that's tedious to write. For another, if one command ends up taking a long time such that the sudo session expires, you may have to reenter the root password a second time, which will be annoying, especially if you stepped out for a coffee break. It would be better to require to always run as root, by adding a check on the uid of the executing user.
Since you run the remote script with bash -x ~/wks ... instead of just ~/wks, there's no need to make it executable with chmod, so that step can be dropped.
Putting the above together (and then some), I would write like this:
#!/bin/bash
set -euo pipefail
if [ $# != 3 ]; then
echo "Usage: $0 <server-ssh-address> <ssl-admin-email> <ssl-domain>"
echo "Example: singledomaincertnginx.sh user#mydomain.com admin#mydomain.com some-sub-domain.mydomain.com"
exit 1
fi
remote=$1
email=$2
domain=$3
remote_script_path=./wks
ssh $remote "cat > $remote_script_path" << 'EOF'
#!/bin/bash
set -euo pipefail
if [[ "$(id -u)" != 0 ]]; then
echo "This script must be run as root. (sudo $0)"
exit 1
fi
email=$1
domain=$2
echo email: $email
echo domain: $domain
add-apt-repository -y ppa:certbot/certbot
apt-get update
apt-get upgrade -y
apt-get install -y software-properties-common
apt-get install -y python-certbot-nginx
apt-get install -y nginx
sed -i "s/server_name .*;/server_name $domain;/" /etc/nginx/sites-available/default
systemctl restart nginx.service
#service nginx restart
if [[ -e /etc/letsencrypt/live/$domain/fullchain.pem ]]; then
certbot -n --nginx --agree-tos -m $email -d $domain
fi
if ! crontab -l | grep -q certbot; then
crontab -l | {
cat
echo
echo "15 3 * * * /usr/bin/certbot renew --quiet"
echo
} | crontab -
fi
EOF
ssh -t $remote "sudo bash -x $remote_script_path $email $domain"
Are you looking for something like this:
if [[ "$(grep '/usr/bin/certbot' /var/spool/cron/crontabs/$(whoami))" = "" ]]
then
echo "15 3 * * * /usr/bin/certbot renew --quiet" >> /var/spool/cron/crontabs/$(whoami)
fi
and add the missing fi at the end (the if in your script is never closed).
You can also avoid that much sudo by concatenating the commands, as in the following (note that the domain has to be passed into the inner shell explicitly, because single quotes prevent $2 from expanding):
sudo bash -c 'add-apt-repository -y ppa:certbot/certbot; apt-get update; apt-get upgrade -y; apt-get install -y software-properties-common python-certbot-nginx nginx; sed -i "s/server_name .*;/server_name $1;/" /etc/nginx/sites-available/default; systemctl restart nginx.service' bash "$2"
If you are doing this with sudo, you are doing it as root.
This is a simple thing to do in Ansible; best to do it there.
To set up the cron job, do this:
CRON_FILE="/etc/cron.d/certbot"
if [ ! -f $CRON_FILE ] ; then
echo '15 3 * * * root /usr/bin/certbot renew --quiet' > $CRON_FILE  # files in /etc/cron.d need the user field
fi
There are multiple ways to do this and they could be considered "proper" depending on the scenario.
One way to do it at boot time could be using cloud-init. For example, on AWS you could add your custom script as user data when creating the instance (see the sketch at the end of this answer).
This allows running commands on launch of your instance. In case you would like to automate this process (infrastructure as code), you could use, for example, Terraform.
If for some reason you already have the instance up and running and just want to update on demand without using ssh, you could use SaltStack.
Talking about "idempotency", Ansible could also be a very good tool for doing this; from the Ansible glossary:
An operation is idempotent if the result of performing it once is exactly the same as the result of performing it repeatedly without any intervening actions.
There are many tools that can help you achieve this, only thing is to find the tool that adapts better to your needs/scenario.
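For the cloud-init route mentioned above, a minimal user-data sketch could look like the following; it assumes an Ubuntu instance where cloud-init runs the user data once as root on first boot, and the email/domain values are placeholders:
#!/bin/bash
# Executed once at first boot as root, so no sudo is needed here
apt-get update
apt-get install -y software-properties-common nginx
add-apt-repository -y ppa:certbot/certbot
apt-get update
apt-get install -y python-certbot-nginx
certbot --nginx --agree-tos --redirect --noninteractive \
    --email admin@example.com \
    --domain sub.example.com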
Copy-paste solution for nginx + Ubuntu
Install dependencies
sudo apt-get install nginx -y
sudo apt-get install software-properties-common -y
sudo add-apt-repository universe -y
sudo add-apt-repository ppa:certbot/certbot -y
sudo apt-get update
sudo apt-get install certbot python-certbot-nginx -y
Get SSL certificate and redirect all traffic from http to https
certbot --nginx --agree-tos --redirect --noninteractive \
--email YOUR@EMAIL.COM \
--domain YOUR.DOMAIN.COM
Test renewal
certbot renew --dry-run
Docs
https://certbot.eff.org/lets-encrypt/ubuntuxenial-nginx

Is the Syntax for the Bash Script right for an if elif statement

I added another condition to my if/elif chain in my bash shell script. I am new to shell scripting; can you review my code and check whether my syntax for the if conditions is right, especially the fi placement? Much appreciated.
if [ -f /etc/system-release ] && grep Amazon /etc/system-release > /dev/null; then
cd /tmp
sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
else
# we're either RedHat or Ubuntu
DISTRIBUTOR=`lsb_release -is`
DISTRIBUTOR2=`lsb_release -cs`
if [ "trusty" == $DISTRIBUTOR2 ]; then
cd /tmp
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb
sudo dpkg -i amazon-ssm-agent.deb
sudo start amazon-ssm-agent
elif [ "RedHatEnterpriseServer" == $DISTRIBUTOR ]; then
cd /tmp
sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
elif [ "xenial" == $DISTRIBUTOR2 ]; then
cd /tmp
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb
sudo dpkg -i amazon-ssm-agent.deb
sudo systemctl enable amazon-ssm-agent
fi
fi
sleep 10
Looks basically ok, but https://www.shellcheck.net/ has a couple of comments that you should probably address.
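For instance, ShellCheck will most likely flag the backticks and the unquoted variables in the tests; a sketch of those two fixes:
DISTRIBUTOR=$(lsb_release -is)    # $( ) instead of backticks
DISTRIBUTOR2=$(lsb_release -cs)
if [ "$DISTRIBUTOR2" = "trusty" ]; then    # quoting keeps the test valid even if the value is empty
    : # ...same commands as before...
elif [ "$DISTRIBUTOR" = "RedHatEnterpriseServer" ]; then
    : # ...
fi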

How can I check if a package is installed and install it if not?

I'm working on a Ubuntu system and currently this is what I'm doing:
if ! which command > /dev/null; then
echo -e "Command not found! Install? (y/n) \c"
read
if "$REPLY" = "y"; then
sudo apt-get install command
fi
fi
Is this what most people would do? Or is there a more elegant solution?
To check if packagename was installed, type:
dpkg -s <packagename>
You can also use dpkg-query that has a neater output for your purpose, and accepts wild cards, too.
dpkg-query -l <packagename>
To find what package owns the command, try:
dpkg -S `which <command>`
For further details, see article Find out if package is installed in Linux and dpkg cheat sheet.
To be a little more explicit, here's a bit of Bash script that checks for a package and installs it if required. Of course, you can do other things upon finding that the package is missing, such as simply exiting with an error code.
REQUIRED_PKG="some-package"
PKG_OK=$(dpkg-query -W --showformat='${Status}\n' $REQUIRED_PKG|grep "install ok installed")
echo Checking for $REQUIRED_PKG: $PKG_OK
if [ "" = "$PKG_OK" ]; then
echo "No $REQUIRED_PKG. Setting up $REQUIRED_PKG."
sudo apt-get --yes install $REQUIRED_PKG
fi
If the script runs within a GUI (e.g., it is a Nautilus script), you'll probably want to replace the 'sudo' invocation with a 'gksudo' one.
This one-liner returns 1 (installed) or 0 (not installed) for the 'nano' package...
$(dpkg-query -W -f='${Status}' nano 2>/dev/null | grep -c "ok installed")
even if the package does not exist or is not available.
The example below installs the 'nano' package if it is not installed...
if [ $(dpkg-query -W -f='${Status}' nano 2>/dev/null | grep -c "ok installed") -eq 0 ];
then
apt-get install nano;
fi
dpkg-query --showformat='${db:Status-Status}'
This produces a small output string which is unlikely to change and is easy to compare deterministically without grep:
pkg=hello
status="$(dpkg-query -W --showformat='${db:Status-Status}' "$pkg" 2>&1)"
if [ ! $? = 0 ] || [ ! "$status" = installed ]; then
sudo apt install $pkg
fi
The $? = 0 check is needed because if you've never installed a package before, and after you remove certain packages such as hello, dpkg-query exits with status 1 and outputs to stderr:
dpkg-query: no packages found matching hello
instead of outputting not-installed. The 2>&1 captures that error message too when it occurs, preventing it from going to the terminal.
For multiple packages:
pkgs='hello certbot'
install=false
for pkg in $pkgs; do
status="$(dpkg-query -W --showformat='${db:Status-Status}' "$pkg" 2>&1)"
if [ ! $? = 0 ] || [ ! "$status" = installed ]; then
install=true
break
fi
done
if "$install"; then
sudo apt install $pkgs
fi
The possible statuses are documented in man dpkg-query as:
n = Not-installed
c = Config-files
H = Half-installed
U = Unpacked
F = Half-configured
W = Triggers-awaiting
t = Triggers-pending
i = Installed
The single letter versions are obtainable with db:Status-Abbrev, but they come together with the action and error status, so you get 3 characters and would need to cut it.
So I think it is reliable enough to rely on the uncapitalized statuses (Config-files vs config-files) not changing instead.
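If you do prefer the single-letter form, a sketch of the cut mentioned above (assuming, as in the dpkg -l listing, that the second character of db:Status-Abbrev is the package status):
pkg=hello
abbrev="$(dpkg-query -W --showformat='${db:Status-Abbrev}' "$pkg" 2>/dev/null | cut -c2)"
if [ "$abbrev" != i ]; then
    sudo apt install "$pkg"
fi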
dpkg -s exit status
This unfortunately doesn't do what most users want:
pkgs='qemu-user pandoc'
if ! dpkg -s $pkgs >/dev/null 2>&1; then
sudo apt-get install $pkgs
fi
because for some packages, e.g. certbot, doing:
sudo apt install certbot
sudo apt remove certbot
leaves certbot in state config-files, which means that config files were left in the machine. And in that state, dpkg -s still returns 0, because the package metadata is still kept around so that those config files can be handled more nicely.
To actually make dpkg -s return 1 as desired, --purge would be needed:
sudo apt remove --purge certbot
which actually moves it into not-installed/dpkg-query: no packages found matching.
Note that only certain packages leave config files behind. A simpler package like hello goes directly from installed to not-installed without --purge.
Tested on Ubuntu 20.10.
Python apt package
There is a pre-installed Python 3 package called apt in Ubuntu 18.04 which exposes a Python apt interface!
A script that checks if a package is installed and installs it if not can be seen at: How to install a package using the python-apt API
Here is a copy for reference:
#!/usr/bin/env python3
# aptinstall.py
import apt
import sys
pkg_name = "libjs-yui-doc"
cache = apt.cache.Cache()
cache.update()
cache.open()
pkg = cache[pkg_name]
if pkg.is_installed:
    print("{pkg_name} already installed".format(pkg_name=pkg_name))
else:
    pkg.mark_install()
    try:
        cache.commit()
    except Exception as arg:
        print("Sorry, package installation failed [{err}]".format(err=str(arg)), file=sys.stderr)
Check if an executable is in PATH instead
See: How can I check if a program exists from a Bash script?
See also
https://askubuntu.com/questions/165951/dpkg-get-selections-shows-packages-marked-deinstall
https://askubuntu.com/questions/423355/how-do-i-check-if-a-package-is-installed-on-my-server
Ubuntu added its "Personal Package Archive" (PPA), and PPA packages have a different result.
A native Debian repository package is not installed:
~$ dpkg-query -l apache-perl
~$ echo $?
1
A PPA package registered on the host and installed:
~$ dpkg-query -l libreoffice
~$ echo $?
0
A PPA package registered on the host, but not installed:
~$ dpkg-query -l domy-ce
~$ echo $?
0
~$ sudo apt-get remove domy-ce
[sudo] password for user:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package domy-ce is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Also posted on: Test if a package is installed in APT
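Given that, if dpkg-query -l is used at all, checking the listing itself rather than the exit code seems safer; a sketch:
pkg=domy-ce
# "ii" at the start of the line means desired=install and status=installed
if dpkg-query -l "$pkg" 2>/dev/null | grep -q '^ii'; then
    echo "$pkg is installed"
else
    sudo apt-get install -y "$pkg"
fi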
UpAndAdam wrote:
However you can't simply rely on return codes here for scripting
In my experience you can rely on dpkg's exit codes.
The return code of dpkg -s is 0 if the package is installed and 1 if it's not, so the simplest solution I found was:
dpkg -s <pkg-name> 2>/dev/null >/dev/null || sudo apt-get -y install <pkg-name>
It works fine for me...
I've settled on one based on Nultyi's answer:
MISSING=$(dpkg --get-selections $PACKAGES 2>&1 | grep -v 'install$' | awk '{ print $6 }')
# Optional check here to skip bothering with apt-get if $MISSING is empty
sudo apt-get install $MISSING
Basically, the error message from dpkg --get-selections is far easier to parse than most of the others, because it doesn't include statuses like "deinstall". It also can check multiple packages simultaneously, something you can't do with just error codes.
Explanation/example:
$ dpkg --get-selections python3-venv python3-dev screen build-essential jq
dpkg: no packages found matching python3-venv
dpkg: no packages found matching python3-dev
screen install
build-essential install
dpkg: no packages found matching jq
So grep removes installed packages from the list, and awk pulls the package names out from the error message, resulting in MISSING='python3-venv python3-dev jq', which can be trivially inserted into an install command.
I'm not blindly issuing an apt-get install $PACKAGES, because as mentioned in the comments, this can unexpectedly upgrade packages you weren't planning on; not really a good idea for automated processes that are expected to be stable.
This seems to work pretty well.
$ sudo dpkg-query -l | grep <some_package_name> | wc -l
It either returns 0 if not installed or some number > 0 if installed.
It seems that nowadays apt-get has an option --no-upgrade that just does what the OP wants:
--no-upgrade Do not upgrade packages. When used in conjunction with install, no-upgrade will prevent packages listed from being upgraded if they are already installed.
Manpage from https://linux.die.net/man/8/apt-get
Therefore you can use
apt-get install --no-upgrade package
and package will be installed only if it's not.
Inspired by Chris' answer:
#! /bin/bash
installed() {
return $(dpkg-query -W -f '${Status}\n' "${1}" 2>&1|awk '/ok installed/{print 0;exit}{print 1}')
}
pkgs=(libgl1-mesa-dev xorg-dev vulkan-tools libvulkan-dev vulkan-validationlayers-dev spirv-tools)
missing_pkgs=""
for pkg in ${pkgs[@]}; do
if ! $(installed $pkg) ; then
missing_pkgs+=" $pkg"
fi
done
if [ ! -z "$missing_pkgs" ]; then
cmd="sudo apt install -y $missing_pkgs"
echo $cmd
fi
This will do it. apt-get install is idempotent.
sudo apt-get install --no-upgrade command
I've found all solutions in previous answers can produce a false positive if a package is installed and then removed, yet the installation package remains on the system.
To replicate:
Install the package: apt-get install curl
Remove the package: apt-get remove curl
Now test the previous answers.
The following command seems to solve this condition:
dpkg-query -W -f='${Status}\n' curl | head -n1 | awk '{print $3;}' | grep -q '^installed$'
This will result in a definitive installed or not-installed.
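Wrapped into the install-if-missing pattern the question asks about, the same check might be used like this (a sketch, keeping curl from the example above and silencing stderr so an unknown package doesn't print an error):
if ! dpkg-query -W -f='${Status}\n' curl 2>/dev/null | head -n1 | awk '{print $3;}' | grep -q '^installed$'
then
    sudo apt-get install -y curl
fi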
$name="rsync"
[ `which $name` ] $$ echo "$name : installed" || sudo apt-get install -y $name
Use:
apt-cache policy <package_name>
If it is not installed, it will show:
Installed: none
Otherwise it will show:
Installed: version
This feature already exists in Ubuntu and Debian, in the command-not-found package.
This explicitly prints 0 if installed else 1 using only awk:
dpkg-query -W -f '${Status}\n' 'PKG' 2>&1|awk '/ok installed/{print 0;exit}{print 1}'
or if you prefer the other way around where 1 means installed and 0 otherwise:
dpkg-query -W -f '${Status}\n' 'PKG' 2>&1|awk '/ok installed/{print 1;exit}{print 0}'
** replace PKG with your package name
Convenience function:
installed() {
return $(dpkg-query -W -f '${Status}\n' "${1}" 2>&1|awk '/ok installed/{print 0;exit}{print 1}')
}
# usage:
installed gcc && echo Yes || echo No
#or
if installed gcc; then
echo yes
else
echo no
fi
For Ubuntu, apt provides a fairly decent way to do this. Below is an example for Google Chrome:
apt -qq list google-chrome-stable 2>/dev/null | grep -qE "(installed|upgradeable)" || apt-get install google-chrome-stable
I'm redirecting error output to null, because apt warns against using its "unstable cli". I suspect list package is stable, so I think it's ok to throw this warning away. The -qq makes apt super quiet.
which <command>
if [ $? == 1 ]; then
<pkg-manager> -y install <command>
fi
This command is the most memorable:
dpkg --get-selections <package-name>
If it's installed it prints:
<package-name> install
Otherwise it prints
No packages found matching <package-name>.
This was tested on Ubuntu 12.04.1 (Precise Pangolin).
apt list [packagename]
seems to be the simplest way to do it outside of dpkg and older apt-* tools.
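If apt list is used from a script anyway, one hedged way to act on it looks like this, bearing in mind apt's own warning (mentioned in other answers here) that its CLI output is not a stable interface:
pkg=jq   # example package name
if ! apt -qq list "$pkg" --installed 2>/dev/null | grep -q 'installed'; then
    sudo apt-get install -y "$pkg"
fi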
I had a similar requirement when running tests locally instead of in Docker. Basically I only wanted to install any .deb files found if they weren't already installed.
# If there are .deb files in the folder, then install them
if [ `ls -1 *.deb 2> /dev/null | wc -l` -gt 0 ]; then
for file in *.deb; do
# Only install if not already installed (non-zero exit code)
dpkg -I ${file} | grep Package: | sed -r 's/ Package:\s+(.*)/\1/g' | xargs dpkg -s
if [ $? != 0 ]; then
dpkg -i ${file}
fi;
done;
else
err "No .deb files found in '$PWD'"
fi
I guess the only problem I can see is that it doesn't check the version number of the package so if .deb file is a newer version. Then this wouldn't overwrite the currently installed package.
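For the version concern, dpkg itself can compare version strings, so a check along these lines could be bolted onto the loop above; this is a sketch, where ${file} is the loop variable from the snippet and reading the name and version out of the .deb with dpkg-deb is my assumption about how to wire it in:
pkg_name=$(dpkg-deb -f "${file}" Package)    # package name from the .deb metadata
new_ver=$(dpkg-deb -f "${file}" Version)     # version shipped in the .deb
cur_ver=$(dpkg-query -W -f='${Version}' "${pkg_name}" 2>/dev/null)
# Install if the package is missing (empty cur_ver) or the .deb carries a newer version
if [ -z "${cur_ver}" ] || dpkg --compare-versions "${new_ver}" gt "${cur_ver}"; then
    dpkg -i "${file}"
fi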
Kinda based off yours just a little more 'elegant'. Just because I'm bored.
#!/bin/bash
FOUND=("\033[38;5;10m")
NOTFOUND=("\033[38;5;9m")
PKG="${#:1}"
command ${PKG} &>/dev/null
if [[ $? != 0 ]]; then
echo -e "${NOTFOUND}[!] ${PKG} not found [!]"
echo -e "${NOTFOUND}[!] Would you like to install ${PKG} now ? [!]"
read -p "[Y/N] >$ " ANSWER
if [[ ${ANSWER} == [yY] || ${ANSWER} == [yY][eE][sS] ]]; then
if grep -q "bian" /etc/os-release; then
sudo apt-get install ${PKG}
elif grep -q "arch" /etc/os-release; then
if [[ -f /bin/yay ]] || [[ -f /bin/yaourt ]]; then
yaourt -S ${PKG} 2>./err || yay -S ${PKG} 2>./err
else
sudo pacman -S ${PKG}
fi
elif grep -q "fedora" /etc/os-release; then
sudo dnf install ${PKG}
else
echo -e "${NOTFOUND}[!] This script couldn't detect your package manager [!]"
echo -e "${NOTFOUND}[!] Manually install it [!]"
fi
elif [[ ${ANSWER} == [nN] || ${ANSWER} == [nN][oO] ]]; then
echo -e "${NOTFOUND}[!] Exiting [!]"
fi
else
echo -e "${FOUND}[+] ${PKG} found [+]"
fi
The answers that suggest to use something along the lines of:
dpkg-query --showformat '${db:Status-Status}\n' --show $package | grep -q '^installed$'
dpkg-query --showformat '${Status}\n' --show $package | grep -q '^install ok installed$'
are correct.
But if you have the package dpkg-dev installed and you do not just want to check whether a package is installed but you also:
want to know whether a package is installed in a certain version
want to have a package in a certain architecture
want to see if a virtual package is provided
then you can abuse the dpkg-checkbuilddeps tool for this job:
dpkg-checkbuilddeps -d apt /dev/null
This will check whether apt is installed.
The following will check whether apt is installed in at least version 2.3.15 and grep is installed as amd64 and the virtual package x-window-manager is provided by some of the installed packages:
dpkg-checkbuilddeps -d 'apt (>= 2.3.15), grep:amd64, x-window-manager' /dev/null
The exit status of dpkg-checkbuilddeps will tell the script whether the dependencies are satisfied or not. Since this method supports passing multiple packages, you only have to run dpkg-checkbuilddeps once even if you want to check whether more than one package is installed.
Since you mentioned Ubuntu, and you want to do this programmatically (although dpkg variations could also be used, they would be more complex to implement), this approach (using which) will definitely work:
#!/bin/bash
pkgname=mutt
which $pkgname > /dev/null;isPackage=$?
if [ $isPackage != 0 ];then
echo "$pkgname not installed"
sleep 1
read -r -p "${1:-$pkgname will be installed. Are you sure? [y/N]} " response
case "$response" in
[yY][eE][sS]|[yY])
sudo apt-get install $pkgname
;;
*)
false
;;
esac
else
echo "$pkgname is installed"
sleep 1
fi
Although for POSIX compatibility, you would want to use command -v instead as mentioned in another similar question.
In that case,
which $pkgname > /dev/null should be replaced by command -v $pkgname in the above code sample.
If your package has a command-line interface, you can check whether the package exists before installing it by evaluating the output from calling its command-line tool.
Here's an example with a package called helm.
#!/bin/bash
# Call the command for the package silently
helm > /dev/null
# Get the exit code of the last command
command_exit_code="$(echo $?)"
# Run installation if exit code is not equal to 0
if [ "$command_exit_code" -ne "0" ]; then
: # Package does not exist: do the package installation here (":" is a no-op so the branch isn't empty)
else
echo "Skipping 'helm' installation: Package already exists"
fi;
I use the following way:
which mysql 2>&1|tee 1> /dev/null
if [[ "$?" == 0 ]]; then
echo -e "\e[42m MySQL already installed. Moving on...\e[0m"
else
sudo apt-get install -y mysql-server
if [[ "$?" == 0 ]]; then
echo -e "\e[42mMy SQL installed\e[0m"
else
echo -e "\e[42Installation failed\e[0m"
fi
fi
I use this solution as I find it most straightforward.
function must_install(){
return "$(apt -qq list $var --installed 2> /dev/null |wc -l)"
}
function install_if() {
unset install
for var in "$#"
do
if $(must_install $var)
then
install+="${var} "
fi
done
if [ -n "$install" ];
then
sudo apt-get install -qy $install
fi
}
The neat thing is that must_install returns exit status 0 (true) when the package still needs to be installed and 1 (false) when it is already there, so the calling if doesn't need any test using [].
install_if takes any number of packages separated by space.
The problem is apt is not meant to be used in scripts, so this might stop working at any time. 8)
All the answers are good, but they seem complex for a beginner like myself to understand, so here is the solution that worked for me. My Linux environment is CentOS, but I can't confirm it works for all distributions.
PACKAGE_NAME=${PACKAGE_NAME:-node}
if ! command -v $PACKAGE_NAME > /dev/null; then
echo "Installing $PACKAGE_NAME ..."
else
echo "$PACKAGE_NAME already installed"
fi
