Bash script fails when run via script

These commands, when run as a script, fail with this error:
/etc/nginx/.htpasswd: No such file or directory
sudo touch /etc/nginx/.htpasswd
hash="$(echo -n "$MD5Password" | md5sum )"
echo "${ApplicationUserName}:$hash" >> /etc/nginx/.htpasswd
However, when I execute them one at a time manually they work just fine.
Complete code:
#!/bin/bash -x
yum -y update
yum install -y aws-cfn-bootstrap
yum install httpd-tools -y
echo
/opt/aws/bin/cfn-init --verbose --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
/opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
sudo touch /etc/nginx/.htpasswd
hash="$(echo -n "$MD5Password" | md5sum )"
echo "${ApplicationUserName}:$hash" >> /etc/nginx/.htpasswd
This is part of the user data I am passing in an AWS CloudFormation template.
What am I missing here?

The error message occurs because the /etc/nginx directory doesn't exist. Change it to:
mkdir -p /etc/nginx
touch /etc/nginx/.htpasswd
And it should be fine.
As noted in comments, the sudo isn't required or recommended there, so I removed it.
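For reference, here is a sketch of the corrected tail of the user data. As a side note, md5sum also prints a trailing " -" after the digest, so the added cut below keeps only the hash itself:
mkdir -p /etc/nginx
touch /etc/nginx/.htpasswd
hash="$(echo -n "$MD5Password" | md5sum | cut -d' ' -f1)" # keep only the digest field
echo "${ApplicationUserName}:$hash" >> /etc/nginx/.htpasswd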
However, when I execute them one at a time manually they work just fine.
That's not possible. Something else must be creating the /etc/nginx directory later in your script or build process, but before you try those commands manually. Perhaps you install the nginx rpm at a later step?
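To confirm, on the machine where the manual run worked you could ask the package database what owns that directory (assuming an rpm-based image, as the yum commands suggest):
rpm -qf /etc/nginx
If that prints an nginx package, installing that package is what created the directory.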

Related

Lambda gives a "No such file or directory" error (can't find the script file) while running a bash script inside a container, but it succeeds locally

I am creating a Lambda function from a Docker image. The image runs a bash script inside the container, but when I test it I get the following error, even though it works locally. I tested with the ENTRYPOINT both commented out and uncommented. Please help me figure it out.
The dockerfile -
FROM amazon/aws-cli
USER root
ENV AWS_ACCESS_KEY_ID XXXXXXXXXXXXX
ENV AWS_SECRET_ACCESS_KEY XXXXXXXXXXXXX
ENV AWS_DEFAULT_REGION ap-south-1
# RUN mkdir /tmp
COPY main.sh /tmp
WORKDIR /tmp
RUN chmod +x main.sh
RUN touch file_path_final.txt
RUN touch file_path_initial.txt
RUN touch output_final.json
RUN touch output_initial.json
RUN chmod 777 file_path_final.txt
RUN chmod 777 file_path_initial.txt
RUN chmod 777 output_final.json
RUN chmod 777 output_initial.json
RUN yum install jq -y
# ENTRYPOINT ./main.sh ; /bin/bash
ENTRYPOINT ["/bin/sh", "-c" , "ls && ./tmp/main.sh"]
The error -
START RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Version: $LATEST
/bin/sh: ./tmp/main.sh: No such file or directory
/bin/sh: ./tmp/main.sh: No such file or directory
END RequestId: 8d689260-e500-45d7-aac8-ae260834ed96
REPORT RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Duration: 58.29 ms Billed Duration: 59 ms Memory Size: 128 MB Max Memory Used: 3 MB
RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Error: Runtime exited with error: exit status 127
Runtime.ExitError
Here's how I did it to run a C++ program via a bash script:
#Pulling the Lambda base image from the AWS ECR Public registry.
FROM public.ecr.aws/lambda/provided:al2.2022.10.11.10
#Setting the working directory to the Lambda runtime directory.
WORKDIR ${LAMBDA_RUNTIME_DIR}
#Copying the contents of the current directory to the working directory.
COPY . .
#This is updating the installed packages on the container.
RUN yum update -y
# Install sudo, wget and openssl, which is required for building CMake
RUN yum install sudo wget openssl-devel -y
# Install development tools
RUN sudo yum groupinstall "Development Tools" -y
# Download, build and install cmake
RUN yum install -y make
#RUN wget https://github.com/Kitware/CMake/releases/download/v3.22.3/cmake-3.22.3.tar.gz && tar -zxvf cmake-3.22.3.tar.gz && cd ./cmake-3.22.3 && ./bootstrap && make && sudo make install
RUN yum -y install gcc-c++ libcurl-devel cmake3 git
RUN ln -s /usr/bin/cmake3 /usr/bin/cmake
RUN ln -s /usr/bin/ctest3 /usr/bin/ctest
RUN ln -s /usr/bin/cpack3 /usr/bin/cpack
# get cmake version
RUN cmake --version
RUN echo $(cmake --version)
#This is building the C++ project inside the container.
RUN ./build.sh
RUN chmod 755 run.sh bootstrap
#This sets run.sh as the handler command.
CMD [ "run.sh" ]
You will need a bootstrap file in the root directory (from the AWS docs):
#!/bin/sh
set -euo pipefail
# Initialization - load function handler
source $LAMBDA_RUNTIME_DIR/"$(echo $_HANDLER | cut -d. -f1).sh"
# Processing
while true
do
HEADERS="$(mktemp)"
# Get an event. The HTTP request will block until one is received
EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
# Extract request ID by scraping response headers received above
REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
# Run the handler function from the script
RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA")
# Send the response
curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done
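For completeness, the bootstrap sources a file named after the part of $_HANDLER before the dot, then calls the function named after the part behind the dot. A minimal sketch of such a handler, assuming a hypothetical function.sh with _HANDLER set to function.handler:
#!/bin/sh
# function.sh: sourced by bootstrap; the "handler" function name comes from $_HANDLER
handler () {
  EVENT_DATA=$1
  # do the real work here; whatever is echoed becomes the invocation response
  echo "{\"statusCode\": 200}"
}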

Installing OSSEC agent on a container. The ossec install script (install.sh) fails and loops infinitely when passing arguments via a script

Basically I am going to have a whole bunch of Ubuntu containers that will have the OSSEC agent installed and will communicate with a main server. I want to automate the installation, so using the RUN instruction in the Dockerfile I wrote a script that downloads the OSSEC tar file, unpacks it, cds into the directory, and runs the install script while passing an answer to each question of the installation phase:
Dockerfile:
FROM ubuntu
RUN apt-get update && apt-get install -y \
build-essential \
libmysqlclient-dev \
postgresql-common \
wget \
tar
RUN wget -U ossec https://bintray.com/artifact/download/ossec/ossec-hids/ossec-hids-2.8.3.tar.gz
RUN tar -xvf ossec-hids-2.8.3.tar.gz && \
rm -f ossec-hids-2.8.3.tar.gz && \
cd ossec-hids-2.8.3 && \
echo "en agent \n 192.168.1.50 y y y" | ./install.sh
When it echoes the arguments into the script, install.sh fails and loops over the second question infinitely. Note that I have tried printf, an expect script, the yes command, and running the script inside the container, all with the same outcome.
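For what it's worth, plain echo does not turn \n into newlines, so install.sh receives everything on one line. A sketch of the presumably intended piping, assuming install.sh reads one answer per prompt line:
printf 'en\nagent\n192.168.1.50\ny\ny\ny\n' | ./install.sh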

What is the proper way to script a new nginx instance with SSL on a new Ubuntu 16.04 server?

I have this so far, but I'm missing a couple of things, like getting the cron job scripted. I don't want to do this as root, so I'm assuming some more could be done to set up the first user at the same time. The script would need to be idempotent (it can be run over and over again without risking changing anything if it was run with the same arguments before).
singledomaincertnginx.sh:
#!/bin/bash
if [ -z "$3" ]; then
echo "usage: singledomaincertnginx.sh <server-ssh-address> <ssl-admin-email> <ssl-domain>"
echo "example: singledomaincertnginx.sh user@mydomain.com admin@mydomain.com some-sub-domain.mydomain.com"
exit
fi
ssh $1 "cat > ~/wks" << 'EOF'
#!/bin/bash
echo email: $1
echo domain: $2
sudo add-apt-repository -y ppa:certbot/certbot
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y software-properties-common
sudo apt-get install -y python-certbot-nginx
sudo apt-get install -y nginx
sudo sed -i "s/server_name .*;/server_name $2;/" /etc/nginx/sites-available/default
sudo systemctl restart nginx.service
if [[ -e /etc/letsencrypt/live/$2/fullchain.pem ]]; then
sudo certbot -n --nginx --agree-tos -m "$1" -d "$2"
fi
if [[ ! sudo crontab -l | grep certbot ]]; then
# todo: add cron job to renew: 15 3 * * * /usr/bin/certbot renew --quiet
EOF
ssh $1 "chmod +x ~/wks"
ssh -t $1 "bash -x -e ~/wks $2 $3"
I have this so far but I'm missing a couple of things like getting the cron job scripted.
Here's one way to complete (and correct) what you started:
if ! sudo crontab -l | grep certbot; then
echo "15 3 * * * /usr/bin/certbot renew --quiet" | sudo tee -a /var/spool/cron/crontabs/root >/dev/null
fi
Here's another way I prefer because it doesn't need to know the path of the crontabs:
if ! sudo crontab -l | grep certbot; then
sudo crontab -l | { cat; echo "15 3 * * * /usr/bin/certbot renew --quiet"; } | sudo crontab -
fi
Something I see missing is how the certificate file /etc/letsencrypt/live/$domain/fullchain.pem gets created.
Do you provide that by other means,
or do you need help with that part?
Don't want to do this as root.
Most of the steps involve running apt-get,
and for that you already require root.
Perhaps you meant that you don't want to do the renewals using root.
Some services operate as a dedicated user instead of root,
but looking through the documentation of certbot I haven't seen anything like that.
So it seems a common practice to do the renewals with root,
so adding the renewal command to root's crontab seems fine to me.
I would improve a couple of things in the script to make it more robust:
The positional parameters $1, $2 and so on scattered around are easy to lose track of, which could lead to errors. I would give them proper names.
The command line argument validation if [ -z "$3" ] is weak, I would make that more strict as if [ $# != 3 ].
Once the remote script is generated, you call it with bash -e, which is good for safeguarding. But if the script is called by something else without -e, the safeguard won't be there. It would be better to build that safeguard into the script itself with set -e. I would go further and use set -euo pipefail which is even more strict. And I would put that in the outer script too.
Most of the commands in the remote script require sudo. For one thing, that's tedious to write. For another, if one command takes so long that the sudo session expires, you may have to reenter the root password a second time, which is annoying, especially if you stepped out for a coffee break. It would be better to require the script to always run as root, by adding a check on the uid of the executing user.
Since you run the remote script with bash -x ~/wks ... instead of just ~/wks, there's no need to make it executable with chmod, so that step can be dropped.
Putting the above together (and then some), I would write like this:
#!/bin/bash
set -euo pipefail
if [ $# != 3 ]; then
echo "Usage: $0 <server-ssh-address> <ssl-admin-email> <ssl-domain>"
echo "Example: singledomaincertnginx.sh user#mydomain.com admin#mydomain.com some-sub-domain.mydomain.com"
exit 1
fi
remote=$1
email=$2
domain=$3
remote_script_path=./wks
ssh $remote "cat > $remote_script_path" << 'EOF'
#!/bin/bash
set -euo pipefail
if [[ "$(id -u)" != 0 ]]; then
echo "This script must be run as root. (sudo $0)"
exit 1
fi
email=$1
domain=$2
echo email: $email
echo domain: $domain
add-apt-repository -y ppa:certbot/certbot
apt-get update
apt-get upgrade -y
apt-get install -y software-properties-common
apt-get install -y python-certbot-nginx
apt-get install -y nginx
sed -i "s/server_name .*;/server_name $domain;/" /etc/nginx/sites-available/default
systemctl restart nginx.service
#service nginx restart
if [[ -e /etc/letsencrypt/live/$domain/fullchain.pem ]]; then
certbot -n --nginx --agree-tos -m $email -d $domain
fi
if ! crontab -l | grep -q certbot; then
crontab -l | {
cat
echo
echo "15 3 * * * /usr/bin/certbot renew --quiet"
echo
} | crontab -
fi
EOF
ssh -t $remote "sudo bash -x $remote_script_path $email $domain"
Are you looking for something like this:
if [[ "$(grep '/usr/bin/certbot' /var/spool/cron/crontabs/$(whoami))" = "" ]]
then
echo "15 3 * * * /usr/bin/certbot renew --quiet" >> /var/spool/cron/crontabs/$(whoami)
fi
And don't forget the fi at the end to close your unclosed if block.
You can also avoid that much sudo by concatenating the commands, as in:
sudo bash -c 'add-apt-repository -y ppa:certbot/certbot;apt-get update;apt-get upgrade -y;apt-get install -y software-properties-common python-certbot-nginx nginx;sed -i "s/server_name .*;/server_name $2;/" /etc/nginx/sites-available/default;systemctl restart nginx.service'
If you are doing this with sudo, you are doing this as root.
This is a simple thing to do in Ansible; it's best done there.
To set up the cron job, do this:
CRON_FILE="/etc/cron.d/certbot"
if [ ! -f $CRON_FILE ] ; then
echo '15 3 * * * root /usr/bin/certbot renew --quiet' > $CRON_FILE
fi
There are multiple ways to do this and they could be considered "proper" depending on the scenario.
One way to do it at boot time is to use cloud-init. For testing, in the case of AWS, you can add your custom script as user data when creating the instance:
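For example, a minimal user-data script for that box could look like this (a sketch assuming an Ubuntu instance; the email and domain are placeholder values):
#!/bin/bash
# cloud-init runs this once, as root, on first boot
apt-get update
apt-get install -y nginx software-properties-common
add-apt-repository -y ppa:certbot/certbot
apt-get update
apt-get install -y certbot python-certbot-nginx
certbot --nginx --agree-tos --redirect --noninteractive \
  --email admin@example.com --domain www.example.com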
This will allow running commands at launch of your instance. In case you would like to automate this process (infrastructure as code), you could use, for example, Terraform.
If for some reason you already have the instance up and running and just want to update it on demand without using ssh, you could use SaltStack.
Talking about idempotency, Ansible could also be a very good tool for doing this. From the Ansible glossary:
An operation is idempotent if the result of performing it once is exactly the same as the result of performing it repeatedly without any intervening actions.
There are many tools that can help you achieve this; the only thing is to find the tool that best fits your needs/scenario.
Copy-paste solution for nginx + Ubuntu
Install dependencies
sudo apt-get install nginx -y
sudo apt-get install software-properties-common -y
sudo add-apt-repository universe -y
sudo add-apt-repository ppa:certbot/certbot -y
sudo apt-get update
sudo apt-get install certbot python-certbot-nginx -y
Get SSL certificate and redirect all traffic from http to https
certbot --nginx --agree-tos --redirect --noninteractive \
--email YOUR@EMAIL.COM \
--domain YOUR.DOMAIN.COM
Test renewal
certbot renew --dry-run
Docs
https://certbot.eff.org/lets-encrypt/ubuntuxenial-nginx

Failed to Call Access Method Exception when Creating a MedicationOrder in FHIR

I am using the http://fhirtest.uhn.ca/baseDstu2 test FHIR server and it has worked okay so far.
Now I am getting an HTTP 500 "Failed to Call Access Method" exception.
Does anyone have any idea what has gone wrong?
This happens frequently, probably because someone tested weird queries or similar that put the server in an unstable state.
I suggest posting a comment in https://chat.fhir.org/#narrow/stream/hapi to get the server restarted,
or installing http://hapifhir.io/doc_cli.html, which does basically the same thing but gives you full control.
I built a Dockerfile:
FROM debian:sid
MAINTAINER Günter Zöchbauer <guenter@yyy.com>
ENV DEBIAN_FRONTEND noninteractive
RUN \
apt-get -q update && \
DEBIAN_FRONTEND=noninteractive && \
apt-get install --no-install-recommends -y -q \
apt-transport-https \
apt-utils \
wget \
bzip2 \
default-jdk
# net-tools sudo procps telnet
RUN \
apt-get update && \
wget https://github.com/jamesagnew/hapi-fhir/releases/download/v2.0/hapi-fhir-2.0-cli.tar.bz2 && \
rm -rf /var/lib/apt/lists/*
RUN mkdir -p /hapi_fhir_cli && \
tar xjf hapi-fhir-2.0-cli.tar.bz2 -C /hapi_fhir_cli && \
rm hapi-fhir-2.0-cli.tar.bz2
RUN ls -la
RUN ls -la /hapi_fhir_cli
ADD prepare_server.sh /hapi_fhir_cli/
RUN \
cd /hapi_fhir_cli && \
bash -c /hapi_fhir_cli/prepare_server.sh
ADD start.sh /hapi_fhir_cli/
WORKDIR /hapi_fhir_cli
EXPOSE 5555
ENTRYPOINT ["/hapi_fhir_cli/start.sh"]
This requires, in the same directory as the Dockerfile:
prepare_server.sh
#!/usr/bin/env bash
ls -la
./hapi-fhir-cli run-server --allow-external-refs &
while ! timeout 1 bash -c "echo > /dev/tcp/localhost/8080"; do sleep 10; done
./hapi-fhir-cli upload-definitions -t http://localhost:8080/baseDstu2
./hapi-fhir-cli upload-examples -c -t http://localhost:8080/baseDstu2
start.sh
#!/usr/bin/env bash
cd /hapi_fhir_cli
./hapi-fhir-cli run-server --allow-external-refs -p 5555
Build
docker build -t myname/hapi_fhir_cli_dstu2 . #--no-cache
Run
docker run -d -p 5555:5555 [image id from docker build]
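Once the container is up, you could sanity-check the server by querying its conformance endpoint (assuming the baseDstu2 base path used above):
curl http://localhost:5555/baseDstu2/metadata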
Hope this helps.

Why does "docker run" error with "no such file or directory"?

I am trying to run a container which runs an automated build. Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER pmandayam
# update dpkg repositories
RUN apt-get update
# install wget
RUN apt-get install -y wget
# get maven 3.2.2
RUN wget --no-verbose -O /tmp/apache-maven-3.2.2.tar.gz http://archive.apache.org/dist/maven/maven-3/3.2.2/binaries/apache-maven-3.2.2-bin.tar.gz
# verify checksum
RUN echo "87e5cc81bc4ab9b83986b3e77e6b3095 /tmp/apache-maven-3.2.2.tar.gz" | md5
sum -c
# install maven
RUN tar xzf /tmp/apache-maven-3.2.2.tar.gz -C /opt/
RUN ln -s /opt/apache-maven-3.2.2 /opt/maven
RUN ln -s /opt/maven/bin/mvn /usr/local/bin
RUN rm -f /tmp/apache-maven-3.2.2.tar.gz
ENV MAVEN_HOME /opt/maven
# remove download archive files
RUN apt-get clean
# set shell variables for java installation
ENV java_version 1.8.0_11
ENV filename jdk-8u11-linux-x64.tar.gz
ENV downloadlink http://download.oracle.com/otn-pub/java/jdk/8u11-b12/$filename
# download java, accepting the license agreement
RUN wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" -O /tmp/$filename $downloadlink
# unpack java
RUN mkdir /opt/java-oracle && tar -zxf /tmp/$filename -C /opt/java-oracle/
ENV JAVA_HOME /opt/java-oracle/jdk$java_version
ENV PATH $JAVA_HOME/bin:$PATH
# configure symbolic links for the java and javac executables
RUN update-alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 20000 && update-alternatives --install /usr/bin/javac javac $JAVA_HOME/bin/javac 20000
# install mongodb
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list && \
apt-get update && \
apt-get --allow-unauthenticated install -y mongodb-org mongodb-org-server mongodb-org-shell mongodb-org-mongos mongodb-org-tools && \
echo "mongodb-org hold" | dpkg --set-selections && \
echo "mongodb-org-server hold" | dpkg --set-selections && \
echo "mongodb-org-shell hold" | dpkg --set-selections && \
echo "mongodb-org-mongos hold" | dpkg --set-selections && \
echo "mongodb-org-tools hold" | dpkg --set-selections
RUN mkdir -p /data/db
VOLUME /data/db
EXPOSE 27017
COPY build-script /build-script
CMD ["/build-script"]
I can build the image successfully but when I try to run the container I get this error:
$ docker run mybuild
no such file or directory
Error response from daemon: Cannot start container 3e8aa828909afcd8fb82b5a5ac89497a537bef2b930b71a5d20a1b98d6cc1dd6: [8] System error: no such file or directory
What does 'no such file or directory' mean here?
Here is my simple script:
#!/bin/bash
sudo service mongod start
mvn clean verify
sudo service mongod stop
I copy it like this: COPY build-script /build-script
and run it like this: CMD ["/build-script"]. I'm not sure why it's not working.
Using service isn't going to fly - the Docker base images are minimal and don't support this. If you want to run multiple processes, you can use supervisor or runit etc.
In this case, it would be simplest just to start mongo manually in the script e.g. /usr/bin/mongod & or whatever the correct incantation is.
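For example, a sketch of the build script with mongod started directly instead of via service (the --fork, --logpath, and --shutdown flags are standard mongod options; adjust the paths to your setup):
#!/bin/bash
set -e
# start mongod in the background; --fork requires a log path
/usr/bin/mongod --fork --logpath /var/log/mongod.log --dbpath /data/db
mvn clean verify
# stop the daemon cleanly once the build is done
/usr/bin/mongod --shutdown --dbpath /data/db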
BTW the lines where you try to clean up don't have much effect:
RUN rm -f /tmp/apache-maven-3.2.2.tar.gz
...
# remove download archive files
RUN apt-get clean
These files have already been committed to a previous image layer, so doing this doesn't save any disk-space. Instead you have to delete the files in the same Dockerfile instruction in which they're added.
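For instance, the Maven archive could be downloaded, unpacked, and removed in a single instruction, so it never persists in any layer (a sketch based on the lines above):
RUN wget --no-verbose -O /tmp/apache-maven-3.2.2.tar.gz http://archive.apache.org/dist/maven/maven-3/3.2.2/binaries/apache-maven-3.2.2-bin.tar.gz && \
    tar xzf /tmp/apache-maven-3.2.2.tar.gz -C /opt/ && \
    rm -f /tmp/apache-maven-3.2.2.tar.gz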
Also, I would consider changing the base image to a Java one, which would save a lot of work. However, you may have trouble finding one that bundles the official Oracle JDK rather than OpenJDK, if that matters to you.
