Error in elastic beanstalk upstart script? - ruby

I have an AWS Elastic Beanstalk app configured with Ruby/Puma. In the instance's /etc/init I see a puma.conf file with:
$ cat /etc/init/puma.conf
description "Elastic Beanstalk Puma Upstart Manager"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
script
exec /bin/bash <<"EOF"
EB_SCRIPT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k script_dir)
EB_SUPPORT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k support_dir)
. $EB_SUPPORT_DIR/envvars
. $EB_SCRIPT_DIR/use-app-ruby.sh
if grep -eq 'true' /etc/elasticbeanstalk/has_puma.txt; then
exec su -s /bin/bash -c "bundle exec puma -C $EB_SUPPORT_DIR/conf/pumaconf.rb" webapp
else
exec su -s /bin/bash -c "puma -C $EB_SUPPORT_DIR/conf/pumaconf.rb" webapp
fi
EOF
end script
Is it just me or is this broken? Running the if condition (grep -eq 'true' /etc/elasticbeanstalk/has_puma.txt) in the terminal throws an error, as does running the entire if block. /etc/elasticbeanstalk/has_puma.txt contains the single word true.
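For context, grep parses the bundled -eq as -e q, so "q" becomes the pattern and both 'true' and the file name are treated as files to search, hence the error. A quiet match was presumably intended, something like this sketch:
# `-q` (or `-Eq`) makes grep exit 0 on a match without printing anything
if grep -q 'true' /etc/elasticbeanstalk/has_puma.txt; then
  echo "has_puma is true"
fi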
We discovered this because we were facing subtle app level issues which went away when we used bundle exec.
I was able to fix that by modifying the puma.conf thus,
$ cat /etc/init/puma.conf
description "Elastic Beanstalk Puma Upstart Manager"
EB_SUPPORT_DIR=/opt/elasticbeanstalk/support
start on runlevel [2345]
stop on runlevel [!2345]
respawn
script
exec /bin/bash <<"EOF"
EB_SCRIPT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k script_dir)
EB_SUPPORT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k support_dir)
. $EB_SUPPORT_DIR/envvars
. $EB_SCRIPT_DIR/use-app-ruby.sh
exec su -s /bin/bash -c "cd /var/app/current && bundle exec puma -C $EB_SUPPORT_DIR/conf/pumaconf.rb" webapp
EOF
end script
I had to add the cd for it to work; I'm not too sure about that part.
So is this a bug with the elastic beanstalk system? Is this fix correct or is there a cleaner/better way to achieve this?

Related

$PATH not updated when running docker exec sh -c

I have the following script in a sh file running in the host:
printf '\n\n=== Installing asdf ...\n\n'
docker container exec "$CONTAINER_NAME" sh -c 'git clone https://github.com/asdf-vm/asdf.git /root/.asdf --branch v0.10.2'
docker container exec "$CONTAINER_NAME" sh -c 'echo ''. /root/.asdf/asdf.sh'' >> /root/.bashrc'
docker container exec "$CONTAINER_NAME" sh -c 'echo ''. /root/.asdf/completions/asdf.bash'' >> /root/.bashrc'
printf '\n\n=== Installing node/npm using asdf ...\n\n'
NODEJS_VERSION='17.9.0'
docker container exec "$CONTAINER_NAME" sh -c 'asdf plugin add nodejs'
docker container exec "$CONTAINER_NAME" sh -c "asdf install nodejs $NODEJS_VERSION"
docker container exec "$CONTAINER_NAME" sh -c "asdf global nodejs $NODEJS_VERSION"
When asdf plugin add nodejs line is executed I get the following error:
sh: 1: asdf: not found
The whole issue is happening because $PATH is not being updated after the installation of asdf. I tried:
to reload .bashrc/.profile after installing asdf
docker container exec "$CONTAINER_NAME" sh -c '. /root/.bashrc'
to restart the container:
docker "$CONTAINER_NAME" restart
The (not so) weird thing is when I get into the container I can use asdf because, as expected, $PATH contains the path to asdf folders.
Does anybody know what I am missing here?
Each exec runs a new process which loses all its settings when it terminates. You need to start a new Bash shell with the correct options to read .bashrc, or just give up on trying to use its interactive features in noninteractive scripts and instead put these commands in a script, then simply run it.
docker container exec "$container_name" bash -c '
printf "%s\n" "=== Installing asdf ..."
git clone https://github.com/asdf-vm/asdf.git /root/.asdf --branch v0.10.2
. /root/.asdf/asdf.sh
# . /root/.asdf/completions/asdf.bash
printf "%s\n" "=== Installing node/npm using asdf ..."
nodejs_version="17.9.0"
asdf plugin add nodejs
asdf install nodejs "$nodejs_version"
asdf global nodejs "$nodejs_version"'
I could not bring myself to keep all the newlines in your diagnostic messages.
Tangentially see also Correct Bash and shell script variable capitalization which explains why I changed the variables with the container name and the Node version to lower case.
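As an alternative to the inline command string, the same commands could go into a file that is copied into the container and run there (install-asdf.sh is a hypothetical file name):
# on the host: copy the setup script into the container and run it there
docker container cp install-asdf.sh "$container_name":/tmp/install-asdf.sh
docker container exec "$container_name" bash /tmp/install-asdf.sh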

Exec into kubernetes pod in a particular directory

How do I make this work?
[alan@stormfather-0be642-default-1 ~]$ kubectl exec -it my-pod-0 -- bash -c "/bin/bash && cd /tmp"
[root@my-pod-0 /]# pwd
/
Change directory first and then sh into it.
kubectl exec -it my-pod-0 -- bash -c "cd /tmp && /bin/bash"
Mohsin Amjad's answer is both simple and correct. If you are getting the
..."bash": executable file not found in $PATH...
error, it just means the container inside the pod does not have bash installed; try sh or another shell instead, i.e. something like:
kubectl exec -it my-pod-0 -- sh -c "cd /tmp && echo $0 $SHELL"
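If the goal is to end up in an interactive shell already inside that directory, the same pattern works with single quotes (so the local shell does not expand anything first) and exec:
kubectl exec -it my-pod-0 -- sh -c 'cd /tmp && exec sh'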

Running a bash script after the kafka-connect docker is up and running

I have the following Dockerfile
FROM confluentinc/cp-kafka-connect:5.3.1
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/KarthikDuggirala/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
# COPY script and make it executable
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN ["chmod", "+x", "/usr/share/kafka-connect-script/plugins-config.sh"]
#entrypoint
ENTRYPOINT [ "./usr/share/kafka-connect-script/plugins-config.sh" ]
and the following bash script
#!/bin/bash
#script to configure kafka connect with plugins
#export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
#export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=30
echo "Waiting for Kafka Connect to start listening on localhost"
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT"
while [[ $(eval $curl_command) -eq 000 ]]
do
echo "In"
echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter"
echo "Going to sleep for $sleep_second seconds"
# sleep $sleep_second
echo "Finished sleeping"
# ((sleep_second_counter+=$sleep_second))
echo "Finished counter"
done
echo "Out"
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
I run the container and use docker logs to see what's happening, expecting the script to run and wait until Kafka Connect has started. But after a few seconds the script (or something, I don't know what) hangs and I don't see any console output anymore.
I am a bit lost about what is wrong, so I need some guidance on what I am missing, or whether this is simply not the correct way to do it.
What I am trying to do
I want to have logic defined that I could wait for kafka connect to start then run the curl command
curl -X POST -H "Content-Type: application/json" --data '{"name":"","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' http://localhost:8083/connectors
PS: I cannot use docker-compose way to do it, since there are places I have to use docker run
The problem here is that the ENTRYPOINT runs when the container starts and prevents the default CMD from running: your script loops waiting for the server to come up, but the server is started by the CMD, which never runs, so the script loops forever.
You need to do one of the following:
start the Kafka Connect server in your ENTRYPOINT and run your script as the CMD, or run your script outside the container ....
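For example, a minimal sketch of the first option, assuming the base image's default command is /etc/confluent/docker/run (the connector name snmp-source below is made up):
#!/bin/bash
# Start the Connect worker (the image's default command) in the background
/etc/confluent/docker/run &

# Wait until the REST API answers, then register the connector
url="http://localhost:8083/connectors"
until [ "$(curl -s -o /dev/null -w '%{http_code}' "$url")" = "200" ]; do
  echo "Waiting for Kafka Connect to start listening on $url"
  sleep 5
done

curl -X POST -H "Content-Type: application/json" \
  --data '{"name":"snmp-source","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' \
  "$url"

# Keep the worker process in the foreground so the container stays alive
wait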

Unable to Find Entrypoint For Nextcloud (Alpine-based Version) For a Cron Container

I'm using Docker with Rancher v1.6, setting up a Nextcloud stack.
I would like to use a dedicated container for running cron tasks every 15 minutes.
The "normal" Nextcloud Docker image can simply use the following:
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
while /bin/true; do
su -s "/bin/bash" -c "/usr/local/bin/php /var/www/html/cron.php" www-data
echo $$(date) - Running cron finished
sleep 900
done
EOF'
(Pulled from this GitHub post)
However, the Alpine-based image does not have bash, and so it cannot be used.
I found this script in the list of examples:
#!/bin/sh
set -eu
exec busybox crond -f -l 0 -L /dev/stdout
However, I cannot seem to get that working with my docker-compose.yml file.
I don't want to use an external file, just to have the script entirely in the docker-compose.yml file, to make preparation and changes a bit easier.
Thank you!
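In case it helps, here is an untested sketch of an sh-compatible equivalent of the bash entrypoint above, kept inline in docker-compose.yml. It assumes the Alpine image still provides busybox su, the www-data user, and php at /usr/local/bin/php; $$ is doubled so Compose does not expand it itself:
entrypoint: |
  sh -c 'sh -s <<EOF
  trap "break;exit" SIGHUP SIGINT SIGTERM
  while true; do
    su -s "/bin/sh" -c "/usr/local/bin/php -f /var/www/html/cron.php" www-data
    echo $$(date) - Running cron finished
    sleep 900
  done
  EOF'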

How to check if docker daemon is running?

I am trying to create a bash utility script to check if a docker daemon is running in my server.
Is there a better way of checking if the docker daemon is running in my server other than running a code like this?
ps -ef | grep docker
root 1250 1 0 13:28 ? 00:00:04 /usr/bin/dockerd --selinux-enabled
root 1598 1250 0 13:28 ? 00:00:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
root 10997 10916 0 19:47 pts/0 00:00:00 grep --color=auto docker
I would like to create a bash shell script that will check if my docker daemon is running. If it is running then do nothing but if it is not then have the docker daemon started.
My pseudocode is something like this. I am thinking of parsing the output of my ps -ef but I just would like to know if there is a more efficient way of doing my pseudocode.
if(docker is not running)
run docker
end
P.S.
I am no linux expert and I just need to do this utility on my own environment.
I made a little script (on macOS) to ensure Docker is running by checking the exit code of docker stats.
#!/bin/bash
#Open Docker, only if is not running
if (! docker stats --no-stream ); then
# On Mac OS this would be the terminal command to launch Docker
open /Applications/Docker.app
#Wait until Docker daemon is running and has completed initialisation
while (! docker stats --no-stream ); do
# Docker takes a few seconds to initialize
echo "Waiting for Docker to launch..."
sleep 1
done
fi
#Start the Container..
This works for me on Ubuntu
$ systemctl status docker
You have a utility called pgrep on almost all the Linux systems.
You can just do:
pgrep -f docker > /dev/null || echo "starting docker"
Replace the echo command with your docker starting command.
if curl -s --unix-socket /var/run/docker.sock http/_ping >/dev/null 2>&1
then
echo "Running"
else
echo "Not running"
fi
Ref: Docker api v1.28
The following works on macOS, and on Windows if Git Bash is installed. On macOS, open /Applications/Docker.app starts the Docker daemon. I haven't seen anything similar for Windows, however.
## check docker is running at all
## based on https://stackoverflow.com/questions/22009364/is-there-a-try-catch-command-in-bash
{
## will throw an error if the docker daemon is not running and jump
## to the next code chunk
docker ps -q
} || {
echo "Docker is not running. Please start docker on your computer"
echo "When docker has finished starting up press [ENTER} to continue"
read
}
You can simply:
docker version > /dev/null 2>&1
The exit code of that command will be stored to $? so you can check if it's 0, then docker is running.
docker version will exit 1 if the daemon is not running. If other issues are encountered, such as docker not being installed at all, the exit code will vary.
But at the end of the day, if docker is installed and the daemon is running, the exit code will be 0.
The 2>&1 redirects stderr to stdout and > /dev/null redirects stdout to /dev/null, effectively silencing the output no matter what the result of the execution was.
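For example, a small sketch of how that exit code could be used in the utility script:
if docker version > /dev/null 2>&1; then
  echo "Docker daemon is running"
else
  # replace this with whatever starts the daemon in your environment
  echo "Docker daemon is not running"
fi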
You could also just check for the existence of /var/run/docker.pid.
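A small sketch of that approach; note a stale pid file can be left behind after an unclean shutdown, so this is only a heuristic:
if [ -e /var/run/docker.pid ]; then
  echo "Docker appears to be running"
else
  echo "Docker does not appear to be running"
fi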
Following #madsonic, I went for the following
#!/bin/bash
if (! docker stats --no-stream 2>/dev/null); then
# On Mac OS this would be the terminal command to launch Docker
open /Applications/Docker.app
echo -n "Waiting for Docker to launch"
sleep 1
# Wait until Docker daemon is running and has completed initialisation
while (! docker stats --no-stream >/dev/null 2>&1); do
# Docker takes a few seconds to initialize
echo -n "."
sleep 1
done
fi
echo
echo "Docker started"
A function could look like this:
isRunning() {
  # prints the PIDs of any running processes whose command line matches "docker"
  ps -ef | grep "[d]ocker" | awk '{print $2}'
}
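A possible usage sketch for the function above:
if [ -z "$(isRunning)" ]; then
  echo "docker is not running"
  # start docker here
fi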
I created a script to start, stop, and restart a mongodb server.
You only need to change some paths inside the script, and it should also work for you:
Script
I'm sure you want to start the docker daemon so here's the code to start it before executing your Docker run statement:
sudo systemctl start docker
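Combining that with the check from the question's pseudocode, a minimal sketch for a systemd-based host could be:
# start the daemon only if systemd does not already report it as active
if ! systemctl is-active --quiet docker; then
  sudo systemctl start docker
fi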