Running Elasticsearch-7.0 on a Travis Xenial build host - elasticsearch

The Xenial (Ubuntu 16.04) image on Travis-CI comes with Elasticsearch-5.5 preinstalled. What should I put in my .travis.yml to run my builds against Elasticsearch-7.0?

Add these commands to your before_install step:
- curl -s -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.1-amd64.deb
- sudo dpkg -i --force-confnew elasticsearch-7.0.1-amd64.deb
- sudo sed -i.old 's/-Xms1g/-Xms128m/' /etc/elasticsearch/jvm.options
- sudo sed -i.old 's/-Xmx1g/-Xmx128m/' /etc/elasticsearch/jvm.options
- echo -e '-XX:+DisableExplicitGC\n-Djdk.io.permissionsUseCanonicalPath=true\n-Dlog4j.skipJansi=true\n-server\n' | sudo tee -a /etc/elasticsearch/jvm.options
- sudo chown -R elasticsearch:elasticsearch /etc/default/elasticsearch
- sudo systemctl start elasticsearch
The changes to jvm.options are done in an attempt to emulate the existing config for Elasticsearch-5.5, which I assume the Travis peeps have actually thought about.
According to the Travis docs, you should also add this line to your before_script step:
- sleep 10
This is to ensure Elasticsearch is up and running, but I haven't checked if it's actually necessary.

One small addition to @kthy's answer that had me stumbling for a bit: you need to remove - elasticsearch from your services: definition in .travis.yml, otherwise no matter what you put in before_install, the default service will override it!
services:
- elasticsearch
Remove ^^ and then you can proceed with the steps he outlined and it should all work smoothly.
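For reference, a minimal sketch of how the relevant parts of .travis.yml end up looking after these changes (version, heap settings and the sleep are taken from the steps above; note there is deliberately no elasticsearch entry under services:):
dist: xenial
before_install:
  - curl -s -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.1-amd64.deb
  - sudo dpkg -i --force-confnew elasticsearch-7.0.1-amd64.deb
  - sudo sed -i.old 's/-Xms1g/-Xms128m/' /etc/elasticsearch/jvm.options
  - sudo sed -i.old 's/-Xmx1g/-Xmx128m/' /etc/elasticsearch/jvm.options
  - sudo systemctl start elasticsearch
before_script:
  - sleep 10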

If you want to wait for Elasticsearch to start (which may take more or less than 10 seconds), replace the sleep 10 with this:
host="localhost:9200"
response=""
attempt=0
until [ "$response" = "200" ]; do
  if [ $attempt -ge 25 ]; then
    echo "FAILED. Elasticsearch not responding after $attempt tries."
    exit 1
  fi
  echo "Contacting Elasticsearch on ${host}. Try number ${attempt}"
  response=$(curl --write-out '%{http_code}' --silent --output /dev/null "$host")
  sleep 1
  attempt=$((attempt + 1))
done
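If the build image's curl supports --retry-connrefused (curl 7.52 or newer, which the stock Xenial curl may not be), roughly the same wait can be expressed as a single command:
curl --silent --fail --output /dev/null --retry 25 --retry-delay 1 --retry-connrefused "http://localhost:9200"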

Related

Heroku: "heroku ps:exec" not working when deploying a container into a dyno

I'm deploying a TensorFlow Serving container to Heroku. Everything is working fine, but when I try to ssh into the container to execute some commands, Heroku returns this error:
C:\Users\whitm\Desktop\CodeProjects\deep-deblurring-serving>heroku ps:exec
Establishing credentials... error
! Could not connect to dyno!
! Check if the dyno is running with `heroku ps'
The Dyno is running correctly:
C:\Users\whitm\Desktop\CodeProjects\deep-deblurring-serving>heroku ps
Free dyno hours quota remaining this month: 550h 0m (100%)
Free dyno usage for this app: 0h 0m (0%)
For more information on dyno sleeping and how to upgrade, see:
https://devcenter.heroku.com/articles/dyno-sleeping
=== web (Free): /bin/sh -c bash\ heroku-exec.sh (1)
web.1: up 2020/04/11 19:13:51 -0400 (~ 38s ago)
I found a StackOverflow question from two years ago: Shell into a Docker container running on a Heroku dyno. How?. I already took care of all the details explained in that question and in the official Heroku docs about this specific situation: https://devcenter.heroku.com/articles/exec#using-with-docker, but I can't make this work.
This is my Dockerfile:
FROM tensorflow/serving
LABEL maintainer="Whitman Bohorquez" description="Build tf serving based image. This repo must be used as build context"
COPY / /
RUN apt-get update \
&& apt-get install -y git \
&& git reset --hard \
&& apt-get install -y curl \
&& apt-get install -y openssh-server
ENV MODEL_NAME=deblurrer
# Updates listening ports
RUN echo '#!/bin/bash \n\n\
tensorflow_model_server \
--rest_api_port=$PORT \
--model_name=${MODEL_NAME} \
--model_base_path=/models/${MODEL_NAME} \
"$#"' > /usr/bin/tf_serving_entrypoint.sh \
&& chmod +x /usr/bin/tf_serving_entrypoint.sh
# Setup symbolic link from sh to bash
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
CMD bash heroku-exec.sh
Special care was taken with the line RUN rm /bin/sh && ln -s /bin/bash /bin/sh. I'm already installing curl, OpenSSH and Python in the container. I create the file heroku-exec.sh with [ -z "$SSH_CLIENT" ] && source <(curl --fail --retry 3 -sSL "$HEROKU_EXEC_URL") inside it, and successfully copy it into the /app/.profile.d folder, so the final path of the file is /app/.profile.d/heroku-exec.sh. I even tried doing the last step as if the container were in a Heroku Private Space (which is not the case), but I will remove that.
I don't know what else to try and would appreciate some help. I feel I'm doing something wrong with the heroku-exec.sh file, but what do you think?
Thanks in advance!
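For reference, one way the heroku-exec.sh described in the question could be baked into the image at build time is a Dockerfile step like the following (a sketch only: the path /app/.profile.d/heroku-exec.sh and the source line are the ones quoted above, and this by itself is not confirmed to resolve the ps:exec error):
RUN mkdir -p /app/.profile.d && \
    echo '[ -z "$SSH_CLIENT" ] && source <(curl --fail --retry 3 -sSL "$HEROKU_EXEC_URL")' > /app/.profile.d/heroku-exec.sh && \
    chmod +x /app/.profile.d/heroku-exec.sh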

Bash script fails when run via script

These commands, when run as a script, fail with this error:
/etc/nginx/.htpasswd: No such file or directory
sudo touch /etc/nginx/.htpasswd
hash="$(echo -n "$MD5Password" | md5sum )"
echo "${ApplicationUserName}:$hash" >> /etc/nginx/.htpasswd
However, when I execute them one at a time manually they work just fine.
Complete code:
#!/bin/bash -x
yum -y update
yum install -y aws-cfn-bootstrap
yum install httpd-tools -y
echo
/opt/aws/bin/cfn-init --verbose --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
/opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
sudo touch /etc/nginx/.htpasswd
hash="$(echo -n "$MD5Password" | md5sum )"
echo "${ApplicationUserName}:$hash" >> /etc/nginx/.htpasswd
This is part of user data I am passing in an AWS Cloudformation template.
What am I missing here?
The error message occurs because the /etc/nginx directory doesn't exist. Change it to:
mkdir -p /etc/nginx
touch /etc/nginx/.htpasswd
And it should be fine.
As noted in comments, the sudo isn't required or recommended there, so I removed it.
However, when I execute them one at a time manually they work just fine.
That's not possible. Something else must be creating the /etc/nginx directory later in your script or build process, but before you try those commands manually. Perhaps you install the nginx rpm later?
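As an aside, since the script already installs httpd-tools, the htpasswd utility from that package can generate the entry in a format nginx's basic auth actually understands; a sketch, assuming $MD5Password holds the plain-text password and $ApplicationUserName the user name:
mkdir -p /etc/nginx
htpasswd -bc /etc/nginx/.htpasswd "$ApplicationUserName" "$MD5Password"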

What is the proper way to script a new nginx instance with SSL on a new Ubuntu 16.04 server?

I have this so far but I'm missing a couple of things like getting the cron job scripted. Don't want to do this as root. So I'm assuming some more could be done to set up the first user at the same time. The script would need to be idempotent (can be run over and over again without risking changing anything if it was run with the same arguments before).
singledomaincertnginx.sh:
#!/bin/bash
if [ -z "$3" ]; then
echo use is "singledomaincertnginx.sh <server-ssh-address> <ssl-admin-email> <ssl-domain>"
echo example: "singledomaincertnginx.sh user@mydomain.com admin@mydomain.com some-sub-domain.mydomain.com"
exit
fi
ssh $1 "cat > ~/wks" << 'EOF'
#!/bin/bash
echo email: $1
echo domain: $2
sudo add-apt-repository -y ppa:certbot/certbot
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y software-properties-common
sudo apt-get install -y python-certbot-nginx
sudo apt-get install -y nginx
sudo sed -i "s/server_name .*;/server_name $2;/" /etc/nginx/sites-available/default
sudo systemctl restart nginx.service
if [[ -e /etc/letsencrypt/live/$2/fullchain.pem ]]; then
sudo certbot -n --nginx --agree-tos -m "$1" -d "$2"
fi
if [[ ! sudo crontab -l | grep certbot ]]; then
# todo: add cron job to renew: 15 3 * * * /usr/bin/certbot renew --quiet
EOF
ssh $1 "chmod +x ~/wks"
ssh -t $1 "bash -x -e ~/wks $2 $3"
I have this so far but I'm missing a couple of things like getting the cron job scripted.
Here's one way to complete (and correct) what you started:
if ! sudo crontab -l | grep certbot; then
echo "15 3 * * * /usr/bin/certbot renew --quiet" | sudo tee -a /var/spool/cron/crontabs/root >/dev/null
fi
Here's another way I prefer because it doesn't need to know the path of the crontabs:
if ! sudo crontab -l | grep certbot; then
sudo crontab -l | { cat; echo "15 3 * * * /usr/bin/certbot renew --quiet"; } | sudo crontab -
fi
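Either way, you can confirm afterwards that the job was installed:
sudo crontab -l | grep certbot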
Something I see missing is how the certificate file /etc/letsencrypt/live/$domain/fullchain.pem gets created.
Do you provide that by other means,
or do you need help with that part?
Don't want to do this as root.
Most of the steps involve running apt-get,
and for that you already require root.
Perhaps you meant that you don't want to do the renewals using root.
Some services operate as a dedicated user instead of root,
but looking through the documentation of certbot I haven't seen anything like that.
So it seems a common practice to do the renewals with root,
so adding the renewal command to root's crontab seems fine to me.
I would improve a couple of things in the script to make it more robust:
The positional parameters $1, $2 and so on scattered around are easy to lose track of, which could lead to errors. I would give them proper names.
The command line argument validation if [ -z "$3" ] is weak, I would make that more strict as if [ $# != 3 ].
Once the remote script is generated, you call it with bash -e, which is good for safeguarding. But if the script is called by something else without -e, the safeguard won't be there. It would be better to build that safeguard into the script itself with set -e. I would go further and use set -euo pipefail which is even more strict. And I would put that in the outer script too.
Most of the commands in the remote script require sudo. For one thing that's tedious to write. For another, if one command ends up taking a long time such that the sudo session expires, you may have to re-enter the password a second time, which will be annoying, especially if you stepped out for a coffee break. It would be better to require the script to always run as root, by adding a check on the uid of the executing user.
Since you run the remote script with bash -x ~/wks ... instead of just ~/wks, there's no need to make it executable with chmod, so that step can be dropped.
Putting the above together (and then some), I would write like this:
#!/bin/bash
set -euo pipefail
if [ $# != 3 ]; then
echo "Usage: $0 <server-ssh-address> <ssl-admin-email> <ssl-domain>"
echo "Example: singledomaincertnginx.sh user#mydomain.com admin#mydomain.com some-sub-domain.mydomain.com"
exit 1
fi
remote=$1
email=$2
domain=$3
remote_script_path=./wks
ssh $remote "cat > $remote_script_path" << 'EOF'
#!/bin/bash
set -euo pipefail
if [[ "$(id -u)" != 0 ]]; then
echo "This script must be run as root. (sudo $0)"
exit 1
fi
email=$1
domain=$2
echo email: $email
echo domain: $domain
add-apt-repository -y ppa:certbot/certbot
apt-get update
apt-get upgrade -y
apt-get install -y software-properties-common
apt-get install -y python-certbot-nginx
apt-get install -y nginx
sed -i "s/server_name .*;/server_name $domain;/" /etc/nginx/sites-available/default
systemctl restart nginx.service
#service nginx restart
if [[ -e /etc/letsencrypt/live/$domain/fullchain.pem ]]; then
certbot -n --nginx --agree-tos -m $email -d $domain
fi
if ! crontab -l | grep -q certbot; then
crontab -l | {
cat
echo
echo "15 3 * * * /usr/bin/certbot renew --quiet"
echo
} | crontab -
fi
EOF
ssh -t $remote "sudo bash -x $remote_script_path $email $domain"
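Invocation of the outer script would then look like this (the example arguments mirror the usage message above):
./singledomaincertnginx.sh user@mydomain.com admin@mydomain.com some-sub-domain.mydomain.com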
Are you looking for something like this:
if [[ "$(grep '/usr/bin/certbot' /var/spool/cron/crontabs/$(whoami))" = "" ]]
then
echo "15 3 * * * /usr/bin/certbot renew --quiet" >> /var/spool/cron/crontabs/$(whoami)
fi
plus the missing fi at the end of your if block.
You can also avoid that much sudo by concatenating the commands, as in the following (the domain is passed in explicitly, since $2 would not expand inside the single quotes):
sudo bash -c 'add-apt-repository -y ppa:certbot/certbot; apt-get update; apt-get upgrade -y; apt-get install -y software-properties-common python-certbot-nginx nginx; sed -i "s/server_name .*;/server_name $1;/" /etc/nginx/sites-available/default; systemctl restart nginx.service' _ "$2"
If you are doing this with sudo, you are effectively doing it as root.
This is a simple thing to do in Ansible; it is best done there.
To set up the cron job, drop a file into /etc/cron.d (entries there need the extra user field):
CRON_FILE="/etc/cron.d/certbot"
if [ ! -f $CRON_FILE ] ; then
echo '15 3 * * * root /usr/bin/certbot renew --quiet' > $CRON_FILE
fi
There are multiple ways to do this, and any of them could be considered "proper" depending on the scenario.
One way to do it at boot time is cloud-init. For example, on AWS you can add your custom script as user data when creating the instance (see the sketch at the end of this answer).
This lets you run commands on launch of your instance. If you would like to automate the whole process (infrastructure as code), you could use, for example, Terraform.
If for some reason you already have the instance up and running and just want to update it on demand, but not over ssh, you could use SaltStack.
Speaking of idempotency, Ansible could also be a very good tool for this. From the Ansible glossary:
An operation is idempotent if the result of performing it once is exactly the same as the result of performing it repeatedly without any intervening actions.
There are many tools that can help you achieve this; the trick is to find the one that best fits your needs/scenario.
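For illustration, a minimal cloud-init user-data sketch (hypothetical email and domain; the packages and certbot flags mirror the ones used elsewhere in this thread):
#cloud-config
package_update: true
runcmd:
  - add-apt-repository -y ppa:certbot/certbot
  - apt-get update
  - apt-get install -y software-properties-common python-certbot-nginx nginx
  - certbot -n --nginx --agree-tos -m admin@mydomain.com -d some-sub-domain.mydomain.com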
Copy-paste solution for nginx + Ubuntu
Install dependencies
sudo apt-get install nginx -y
sudo apt-get install software-properties-common -y
sudo add-apt-repository universe -y
sudo add-apt-repository ppa:certbot/certbot -y
sudo apt-get update
sudo apt-get install certbot python-certbot-nginx -y
Get SSL certificate and redirect all traffic from http to https
certbot --nginx --agree-tos --redirect --noninteractive \
--email YOUR@EMAIL.COM \
--domain YOUR.DOMAIN.COM
Test renewal
certbot renew --dry-run
Docs
https://certbot.eff.org/lets-encrypt/ubuntuxenial-nginx

elasticsearch failed to start in centos

CentOS 6.7, Elasticsearch 5
I have installed Elasticsearch using the RPM, but it fails to start:
error: permission denied on key 'vm.max_map_count'
Starting elasticsearch: /usr/share/elasticsearch/bin/elasticsearch: line 198: 875 Killed exec "$JAVA" $ES_JAVA_OPTS -Des.path.home="$ES_HOME" -cp "$ES_CLASSPATH" org.elasticsearch.bootstrap.Elasticsearch "$@" 0>&-
[FAILED]
I think you should set vm.max_map_count to an appropriate value.
see https://www.elastic.co/guide/en/elasticsearch/reference/current/_maximum_map_count_check.html
and https://github.com/elastic/elasticsearch/issues/4978
Something like this should solve your issue:
sudo sysctl -w vm.max_map_count=262144
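To make the setting survive a reboot, it can also be persisted the usual sysctl way (not Elasticsearch-specific):
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p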
e.g. edit (vi/vim) /etc/init.d/elasticsearch (or however it is named on your system), and change this part:
CURRENT_MAX_MAP_COUNT=`sysctl vm.max_map_count | cut -d'=' -f2`
if [ -n "$MAX_MAP_COUNT" -a -f /proc/sys/vm/max_map_count ]; then
  if [ $MAX_MAP_COUNT -gt $CURRENT_MAX_MAP_COUNT ]; then
    sysctl -q -w vm.max_map_count=$MAX_MAP_COUNT
  fi
fi
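Since the init script reads MAX_MAP_COUNT from the package's environment file, an alternative to editing the init script itself is to set the value there; a sketch, assuming the stock /etc/sysconfig/elasticsearch that the RPM installs:
# in /etc/sysconfig/elasticsearch
MAX_MAP_COUNT=262144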

How to start multiple processes for a Docker container in a bash script

I found some very strange behaviour when building and running a Docker container. I would like to have a container with Cassandra and SSH.
In my Dockerfile I've got:
RUN echo "deb http://www.apache.org/dist/cassandra/debian 20x main" | sudo tee -a /etc/apt/sources.list
RUN echo "deb-src http://www.apache.org/dist/cassandra/debian 20x main" | sudo tee -a /etc/apt/sources.list
RUN gpg --keyserver pgp.mit.edu --recv-keys 4BD736A82B5C1B00
RUN apt-key add ~/.gnupg/pubring.gpg
RUN apt-get update
RUN apt-get -y install cassandra
And then for ssh
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo '{{ docker_ssh_user }}:{{docker_ssh_password}}' | chpasswd
EXPOSE 22
And I added start script to run everything I want:
USER root
ADD start start
RUN chmod 777 start
CMD ["sh" ,"start"]
And here comes problem. When I have start like this below:
#!/bin/bash
/usr/sbin/sshd -D
/usr/sbin/cassandra -f
SSH works well: I can do ssh root@172.17.0.x. After logging in to the container I try to run cqlsh to make sure Cassandra is working, but Cassandra has not started for some reason and I can't access cqlsh. I've also checked /var/log/cassandra/ but it was empty.
In the second scenario I change my start script to this:
#!/bin/bash
/usr/sbin/sshd -D & /usr/sbin/cassandra -f
When I again connect with ssh root@172.17.0.x and then run cqlsh inside the container, I can access cqlsh.
So I was thinking: is the ampersand & doing some voodoo that makes it all work?
Why can't I run the bash start script with one command below the other?
Or am I missing something else?
Thanks for reading && helping.
Thanks to a friend of mine who is a Linux guru, we found the reason for the error.
From the sshd man page, -D means: when this option is specified, sshd will not detach and does not become a daemon. This allows easy monitoring of sshd.
So in the first script, sshd -D was blocking the next command from running.
In the second script, the & sends sshd -D to the background, and then Cassandra could start.
Finally I ended up with this version of the script:
#!/bin/bash
/usr/sbin/sshd
/usr/sbin/cassandra -f
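A slightly more idiomatic variant of that final script (an alternative sketch, not part of the original answer) uses exec so that Cassandra replaces the wrapper shell as the container's foreground process:
#!/bin/bash
# sshd daemonizes itself when started without -D, so it does not block
/usr/sbin/sshd
# exec replaces this shell with Cassandra, which stays in the foreground
exec /usr/sbin/cassandra -f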
