Elasticsearch 5.6.5 cannot start up on Ubuntu 14.04

All,
I am new to Elasticsearch. I have installed Elasticsearch 5.6 on Ubuntu 14.04 with apt-get, following the instructions on this page.
However, it fails to start when I try this command:
sudo -i service elasticsearch start
I then added a log line to the init script to check what the command really is:
log_daemon_msg "sudo -u $ES_USER $DAEMON $DAEMON_OPTS"
start-stop-daemon -d $ES_HOME --start --user "$ES_USER" -c "$ES_USER" --pidfile "$PID_FILE" --exec $DAEMON -- $DAEMON_OPTS
It returns
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -d -p /var/run/elasticsearch/elasticsearch.pid -Edefault.path.logs=/var/log/elasticsearch -Edefault.path.data=/var/lib/elasticsearch -Edefault.path.conf=/etc/elasticsearch
I tried running this command directly, and it exits without printing any log output. I also checked /var/log/elasticsearch, which is an empty directory.
Could anyone help me find out what I am missing?
Thanks.
BTW, I am running this on a machine with 512M RAM and 1G swap, and I have modified jvm.options to -Xms256m and -Xmx256m.
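One way to surface the error is to re-run the same command in the foreground, dropping -d and the pidfile, so the JVM's startup messages land on the console instead of vanishing. A diagnostic sketch, assuming the paths from the command above:

```shell
#!/bin/bash
# Re-run Elasticsearch in the foreground (no -d, no pidfile) so that
# startup errors print to the console instead of being lost.
ES_BIN=/usr/share/elasticsearch/bin/elasticsearch   # path from the command above
if [ -x "$ES_BIN" ]; then
    sudo -u elasticsearch "$ES_BIN" \
        -Edefault.path.logs=/var/log/elasticsearch \
        -Edefault.path.data=/var/lib/elasticsearch \
        -Edefault.path.conf=/etc/elasticsearch
else
    echo "elasticsearch binary not found at $ES_BIN"
fi
```

On a 512M machine a likely failure is the JVM being unable to reserve its heap; running in the foreground makes any such message visible.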

Related

bash script not moving onto next command

I have a simple bash script as follows:
#!/bin/bash
gunicorn --bind 0.0.0.0:5000 --chdir server/ run:app -w 2
supervisord -c /usr/src/server/supervisord.conf -n
The script is supposed to launch the gunicorn service and then start supervisor. However, when the script is called, only the gunicorn service is started and not the supervisor service.
The script is being called as follows from a Dockerfile:
FROM python:3.8.1
RUN apt-get update && \
apt-get -y install netcat && \
apt-get clean
RUN apt-get install -y libpq-dev
RUN apt-get install -y redis-server
COPY . usr/src/server
WORKDIR /usr/src
RUN pip install -r server/requirements.txt
RUN chmod +x /usr/src/server/start.sh
CMD ["/usr/src/server/start.sh"]
I have tried using the && separator between the 2 commands as well, but that doesn't seem to make a difference.
The bash script will move on to the next command when the first command has finished running. supervisord will start when gunicorn exits.
To run Gunicorn in the background, pass the --daemon option.
#!/bin/bash
gunicorn --bind 0.0.0.0:5000 --chdir server/ run:app -w 2 --daemon
supervisord -c /usr/src/server/supervisord.conf -n
However, this doesn't make much sense to me. Since you're running Supervisor, why aren't you running Gunicorn from Supervisor? Starting daemons is Supervisor's job.
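Under that approach, Gunicorn becomes a Supervisor-managed program instead of a line in the start script. A minimal sketch of such a supervisord.conf entry (the program name and paths here are illustrative, not taken from the original setup):

```ini
[supervisord]
nodaemon=true

[program:gunicorn]
command=gunicorn --bind 0.0.0.0:5000 --chdir server/ run:app -w 2
directory=/usr/src
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
```

The Dockerfile CMD then only needs to start supervisord, which starts and restarts Gunicorn for you.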

Running Elasticsearch-7.0 on a Travis Xenial build host

The Xenial (Ubuntu 16.04) image on Travis-CI comes with Elasticsearch-5.5 preinstalled. What should I put in my .travis.yml to run my builds against Elasticsearch-7.0?
Add these commands to your before_install step:
- curl -s -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.1-amd64.deb
- sudo dpkg -i --force-confnew elasticsearch-7.0.1-amd64.deb
- sudo sed -i.old 's/-Xms1g/-Xms128m/' /etc/elasticsearch/jvm.options
- sudo sed -i.old 's/-Xmx1g/-Xmx128m/' /etc/elasticsearch/jvm.options
- echo -e '-XX:+DisableExplicitGC\n-Djdk.io.permissionsUseCanonicalPath=true\n-Dlog4j.skipJansi=true\n-server\n' | sudo tee -a /etc/elasticsearch/jvm.options
- sudo chown -R elasticsearch:elasticsearch /etc/default/elasticsearch
- sudo systemctl start elasticsearch
The changes to jvm.options are done in an attempt to emulate the existing config for Elasticsearch-5.5, which I assume the Travis peeps have actually thought about.
According to the Travis docs, you should also add this line to your before_script step:
- sleep 10
This is to ensure Elasticsearch is up and running, but I haven't checked if it's actually necessary.
One small addition to @kthy's answer that had me stumbling for a bit. You need to remove - elasticsearch from your services: definition in the .travis.yml, otherwise no matter what you put in before_install, the default service will override it!
services:
- elasticsearch
Remove ^^ and then you can proceed with the steps he outlined and it should all work smoothly.
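Putting both points together, the relevant shape of the .travis.yml is roughly this (abbreviated sketch; the full before_install commands are listed in the answer above):

```yaml
# No "- elasticsearch" entry under services: -- the preinstalled 5.5
# service would otherwise override the manually installed version.
before_install:
  - curl -s -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.1-amd64.deb
  - sudo dpkg -i --force-confnew elasticsearch-7.0.1-amd64.deb
  - sudo systemctl start elasticsearch
before_script:
  - sleep 10
```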
If you want to wait for Elasticsearch to start (which may take more or less than 10 seconds), replace the sleep 10 with this:
host="localhost:9200"
response=""
attempt=0
until [ "$response" = "200" ]; do
  if [ "$attempt" -ge 25 ]; then
    echo "FAILED. Elasticsearch not responding after $attempt tries."
    exit 1
  fi
  echo "Contacting Elasticsearch on ${host}. Try number ${attempt}"
  response=$(curl --write-out '%{http_code}' --silent --output /dev/null "$host")
  sleep 1
  attempt=$((attempt + 1))
done

Can't find mongodump on Windows/Docker

I am trying to dump my MongoDB database, which is currently in a Docker container on Windows. I run the following command:
docker run --rm --link docker-mongodb_1:mongo --net docker_default -v /backup:/backup mongo bash -c "mongodump --out /backup/ --host mongo:27017"
The output is something like this (with no errors):
"writing db.entity to "
"done dumping db.entity"
However, I cannot find the actual dump. I have checked C:/backup and my local directory, and tried renaming the output and the volumes, with no luck. Does anyone know where the dump is stored?
I have been trying to do the same. I have written a shell script that does this backup process the way you require. You first need to run the container with a name (whatever you wish the container name to be):
BACKUP_DIR="/backup"
DB_CONTAINER_ID=$(docker ps -aqf "name=<**name of your container**>")
NETWORK_ID=$(docker inspect -f "{{ .NetworkSettings.Networks.root_default.NetworkID }}" "$DB_CONTAINER_ID")
docker run -v "$BACKUP_DIR":/backup --network "$NETWORK_ID" mongo:3.4 su -c "cd /backup && mongodump -h db -u <username> -p <password> --authenticationDatabase <db_name> --db <db_name>"
tar -zcvf "$BACKUP_DIR"/db.tgz "$BACKUP_DIR"/dump
rm -rf "$BACKUP_DIR"/dump

Docker CentOS 7: systemctl does not work: Failed to connect to D-Bus

I am trying to run elasticsearch on docker.
My environment is as follows:
host system : OSX 10.12.5
docker : 17.05.0-ce
docker operating image : centos:latest
I was following this article, but it got stuck at systemctl daemon-reload.
I found the official CentOS response about this D-Bus bug, but when I ran the docker run command it showed the message below.
[!!!!!!] Failed to mount API filesystems, freezing.
How could I solve this problem?
FYI, here is the Dockerfile I used to build the image:
FROM centos
MAINTAINER juneyoung <juneyoung@hanmail.net>
ARG u=elastic
ARG uid=1000
ARG g=elastic
ARG gid=1000
ARG p=elastic
# add USER
RUN groupadd -g ${gid} ${g}
RUN useradd -d /home/${u} -u ${uid} -g ${g} -s /bin/bash ${u}
# systemctl settings from official Centos github
# https://github.com/docker-library/docs/tree/master/centos#systemd-integration
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
# yum settings
RUN yum -y update
RUN yum -y install java-1.8.0-openjdk.x86_64
ENV JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-3.b12.el7_3.x86_64/jre/
# install wget
RUN yum install -y wget
# install net-tools : netstat, ifconfig
RUN yum install -y net-tools
# Elasticsearch install
ENV ELASTIC_VERSION=5.4.0
RUN rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
RUN wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ELASTIC_VERSION}.rpm
RUN rpm -ivh elasticsearch-${ELASTIC_VERSION}.rpm
CMD ["/usr/sbin/init"]
and I ran it with this command:
docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro --name=elastic2 elastic2
First, thanks to @Robert.
I had not thought of it that way.
All I had to do was edit my CMD instruction.
Change it to:
CMD ["elasticsearch"]
However, there are still some chores to do before you can access it from the browser;
refer to this Elasticsearch forum post.
You could follow the commands for a systemd-enabled OS if you replace the normal systemctl command. That's how I install Elasticsearch in a CentOS docker container.
See "docker-systemctl-replacement" for the details.

How to start multiple processes for a Docker container in a bash script

I found very strange behaviour when building and running a docker container. I would like to have a container with Cassandra and SSH.
In my Dockerfile I've got:
RUN echo "deb http://www.apache.org/dist/cassandra/debian 20x main" | sudo tee -a /etc/apt/sources.list
RUN echo "deb-src http://www.apache.org/dist/cassandra/debian 20x main" | sudo tee -a /etc/apt/sources.list
RUN gpg --keyserver pgp.mit.edu --recv-keys 4BD736A82B5C1B00
RUN apt-key add ~/.gnupg/pubring.gpg
RUN apt-get update
RUN apt-get -y install cassandra
And then for ssh
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo '{{ docker_ssh_user }}:{{docker_ssh_password}}' | chpasswd
EXPOSE 22
And I added start script to run everything I want:
USER root
ADD start start
RUN chmod 777 start
CMD ["sh" ,"start"]
And here comes the problem. When my start script looks like this:
#!/bin/bash
/usr/sbin/sshd -D
/usr/sbin/cassandra -f
SSH is working well: I can do ssh root@172.17.0.x. After I log into the container I try to run cqlsh to ensure that Cassandra is working. But Cassandra has not started for some reason and I can't access cqlsh. I've also checked /var/log/cassandra/, but it was empty.
In second scenario I change my start script to this:
#!/bin/bash
/usr/sbin/sshd -D & /usr/sbin/cassandra -f
Again I connect with ssh root@172.17.0.x, and this time when I run cqlsh inside the container I can connect.
So is the ampersand & doing some voodoo that makes it all work?
Why can't I run a bash start script with one command below another?
Or am I missing something else?
Thanks for reading && helping.
Thanks to my friend, a Linux guru, we found the reason for the error.
From the sshd man page, -D means: "When this option is specified, sshd will not detach and does not become a daemon. This allows easy monitoring of sshd."
So in the first script, sshd -D blocked the next command from running.
In the second script, the & sent sshd -D to the background, so Cassandra could start.
Finally I've got this version of script:
#!/bin/bash
/usr/sbin/sshd
/usr/sbin/cassandra -f
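The blocking-versus-background behaviour generalises beyond sshd; here is a minimal sketch with sleep standing in for a foreground daemon such as sshd -D:

```shell
#!/bin/bash
# A foreground command blocks the script until it exits; with '&' the
# shell continues to the next line while the job runs in the background.
start=$(date +%s)

sleep 2 &                    # backgrounded: the script does not wait here
echo "reached immediately"   # runs right away, while sleep is still running

wait                         # block until all background jobs have finished
elapsed=$(( $(date +%s) - start ))
echo "elapsed: ${elapsed}s"
```

Without the &, the echo would only run after the two seconds had elapsed, exactly as cassandra -f only ran once sshd -D exited.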
