Elasticsearch Docker image not running - elasticsearch

I'm trying to build a docker container running elasticsearch v2 using this dockerfile (I'm using RHEL 6 and docker 1.7.1):
FROM partlab/ubuntu-java
MAINTAINER Régis Gaidot <regis#partlab.co>
ENV DEBIAN_FRONTEND noninteractive
ENV INITRD No
ENV LANG en_US.UTF-8
ENV PATH=$PATH:/usr/share/elasticsearch/bin
RUN wget -qO - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add - && \
echo 'deb http://packages.elasticsearch.org/elasticsearch/2.x/debian stable main' \
| tee /etc/apt/sources.list.d/elasticsearch.list && \
apt-get update && \
apt-get install --no-install-recommends -y elasticsearch && \
/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
WORKDIR /usr/share/elasticsearch
RUN set -ex \
&& for path in \
./data \
./logs \
./config \
./config/scripts \
; do \
mkdir -p "$path"; \
chown -R elasticsearch:elasticsearch "$path"; \
done
COPY ./config /usr/share/elasticsearch/config
USER elasticsearch
VOLUME ["/usr/share/elasticsearch"]
EXPOSE 9200 9300
CMD ["elasticsearch"]
I also pulled the prebuilt image using:
docker pull partlab/ubuntu-elasticsearch
But unfortunately every time I use the following docker run command the container exits:
docker run -d -p 9200:9200 -p 9300:9300 --net=bridge --name elastic_container -v /home/my_project/elastic_data partlab/ubuntu-elasticsearch
or this one:
docker run -d -p 9200:9200 -p 9300:9300 --net=bridge --name elastic_container partlab/ubuntu-elasticsearch
Here is the result of docker logs:
[2017-08-28 10:54:16,087][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
Exception in thread "main" java.lang.IllegalStateException: Unable to access 'path.data' (/usr/share/elasticsearch/data/elasticsearch)
Likely root cause: java.nio.file.FileSystemException: /usr/share/elasticsearch/data/elasticsearch: No space left on device
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
at java.nio.file.Files.createDirectories(Files.java:767)
at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:337)
at org.elasticsearch.bootstrap.Security.addPath(Security.java:314)
at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:259)
at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:212)
at org.elasticsearch.bootstrap.Security.configure(Security.java:118)
at org.elasticsearch.bootstrap.Bootstrap.setupSecurity(Bootstrap.java:196)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:167)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
When I build the same image on my Mac using Docker 17.6, it works flawlessly.
Storage space according to df -h on the host:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_lptdlp01-lv_root
99G 96G 0 100% /
tmpfs 32G 1.2M 32G 1% /dev/shm
/dev/sda1 477M 67M 385M 15% /boot
/dev/mapper/vg_lptdlp01-lv_home
886G 105G 737G 13% /home
cm_processes 32G 38M 32G 1% /var/run/cloudera-scm-agent/process
Storage space according to df -h inside an Ubuntu container:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:0-967159-ac79433eebd0c389290f.... 9.8G 7.7G 1.6G 84% /
tmpfs 32G 0 32G 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/mapper/vg_lptdlp01-lv_root 99G 96G 0 100% /etc/hosts
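The df output points at the likely cause: the host root filesystem (/), where /var/lib/docker lives by default, is 100% full, so Elasticsearch cannot create its path.data directory. Freeing space on / (or moving the Docker data root to /home) is likely the real fix; as a narrower, hedged workaround sketch, you could bind-mount a host directory from /home onto the data path (the host path is the one from the -v attempt above, and the chown UID is an assumption about the image's elasticsearch user):
# Hedged workaround sketch: keep the Elasticsearch data directory on /home,
# which still has free space, instead of on the full root filesystem.
mkdir -p /home/my_project/elastic_data
chown -R 1000:1000 /home/my_project/elastic_data   # assumption: adjust to the UID of the elasticsearch user in the image
docker run -d -p 9200:9200 -p 9300:9300 --net=bridge --name elastic_container \
  -v /home/my_project/elastic_data:/usr/share/elasticsearch/data \
  partlab/ubuntu-elasticsearch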

Related

Bash script Partition skip if already exists

I am trying to create a partition using the script below. It works, but sometimes I need to re-run the same script as part of some automation, and then I get an error that the partition is already in use.
parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
mkfs.xfs /dev/sdc1
mkdir -p /opt/app
lsblk
echo "/dev/sdc1 /opt/app xfs defaults 0 1" >>/etc/fstab
mount -a
df -Th
Error log when I execute the same script again:
+ parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
Error: Partition(s) on /dev/sdc are being used.
+ mkfs.xfs /dev/sdc1
mkfs.xfs: /dev/sdc1 contains a mounted filesystem
How can I skip these steps if the filesystem on /dev/sdc is already mounted?
You can check whether the mount is already there, and skip the script if it is:
if grep -qs '/dev/sdc' /proc/mounts; then
    echo "Skipping mount since /dev/sdc already exists"
else
    parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
    mkfs.xfs /dev/sdc1
    mkdir -p /opt/app
    lsblk
    echo "/dev/sdc1 /opt/app xfs defaults 0 1" >>/etc/fstab
    mount -a
    df -Th
fi
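If you would rather key the guard on the mount point than on the device name, a variant sketch (assuming /opt/app is the mount point, as in the fstab line above) could use mountpoint from util-linux and also guard the fstab entry against duplicates:
# Variant sketch: skip based on the mount point instead of the device name.
if mountpoint -q /opt/app; then
    echo "Skipping: /opt/app is already mounted"
else
    parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
    mkfs.xfs /dev/sdc1
    mkdir -p /opt/app
    grep -q '^/dev/sdc1 ' /etc/fstab || echo "/dev/sdc1 /opt/app xfs defaults 0 1" >> /etc/fstab
    mount -a
    df -Th
fi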

Expanding HDFS memory in Cloudera QuickStart on docker

I am trying to use the Cloudera QuickStart Docker image, but it seems that there is no free space on HDFS (0 bytes).
After starting the container:
docker run --hostname=$HOSTNAME -p 80:80 -p 7180:7180 -p 8032:8032 -p 8030:8030 -p 8888:8888 -p 8983:8983 -p 50070:50070 -p 50090:50090 -p 50075:50075 -p 50030:50030 -p 50060:50060 -p 60010:60010 -p 60030:60030 -p 9095:9095 -p 8020:8020 -p 8088:8088 -p 4040:4040 -p 18088:18088 -p 10020:10020 --privileged=true -t -i cloudera/quickstart /usr/bin/docker-quickstart
I can start Cloudera Manager
$ /home/cloudera/cloudera-manager --express
and log into the web GUI.
Here I can see that dfs.datanode.data.dir is the default /var/lib/hadoop-hdfs/cache/hdfs/dfs/data
On the console, hdfs dfsadmin -report gives me:
hdfs dfsadmin -report
Safe mode is ON
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
But when I look inside the container:
df -h
Filesystem Size Used Avail Use% Mounted on
overlay 63G 8.3G 52G 14% /
tmpfs 64M 0 64M 0% /dev
tmpfs 5.9G 0 5.9G 0% /sys/fs/cgroup
/dev/sda1 63G 8.3G 52G 14% /etc/resolv.conf
/dev/sda1 63G 8.3G 52G 14% /etc/hostname
/dev/sda1 63G 8.3G 52G 14% /etc/hosts
shm 64M 0 64M 0% /dev/shm
cm_processes 5.9G 7.8M 5.9G 1% /var/run/cloudera-scm-agent/process
What do I have to do to add additional space to HDFS?
Here I can see that dfs.datanode.data.dir is the default /var/lib/hadoop-hdfs/cache/hdfs/dfs/data
You can use a volume mount into that directory.
More importantly, running df within a container is misleading, and on Mac or Windows the Docker qcow2 file has only a limited size to begin with.
How do you get around the size limitation of Docker.qcow2 in the Docker for Mac?
It looks like you have no datanodes running.
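As a concrete sketch of the volume-mount suggestion (assuming the default dfs.datanode.data.dir shown above and a host directory /hdfs-data created for this purpose; only a subset of the port mappings is repeated for brevity):
# Hedged sketch: back the datanode data directory with a host volume so HDFS
# capacity comes from the host disk rather than the container filesystem.
mkdir -p /hdfs-data
docker run --hostname=$HOSTNAME --privileged=true -t -i \
  -v /hdfs-data:/var/lib/hadoop-hdfs/cache/hdfs/dfs/data \
  -p 8888:8888 -p 7180:7180 -p 50070:50070 \
  cloudera/quickstart /usr/bin/docker-quickstart
The hdfs user inside the container may also need to own the mounted directory; see the chown -R hdfs:hadoop command in the next question's answer.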

HDFS as volume in cloudera quickstart docker

I am fairly new to both Hadoop and Docker.
I have been working on extending the cloudera/quickstart Docker image's Dockerfile and wanted to mount a directory from the host and map it to an HDFS location, so that performance is increased and data persists locally.
When I mount a volume anywhere else with -v /localdir:/someDir, everything works fine, but that's not my goal. When I do -v /localdir:/var/lib/hadoop-hdfs, both the datanode and namenode fail to start and I get: "cd /var/lib/hadoop-hdfs: Permission denied". And when I do -v /localdir:/var/lib/hadoop-hdfs/cache, there is no permission error, but the datanode and namenode (or one of them) fail to start when the Docker image starts, and I can't find any useful information in the log files about the reason.
Has anyone come across this problem, or is there some other solution for putting HDFS outside the Docker container?
I had the same problem, and I managed the situation by copying the entire /var/lib directory from the container to a local directory.
From a terminal, start the cloudera/quickstart container without starting all the Hadoop services:
docker run -ti cloudera/quickstart /bin/bash
In another terminal, copy the container directory to the local directory:
mkdir /local_var_lib
docker exec your_container_id tar Ccf $(dirname /var/lib) - $(basename /var/lib) | tar Cxf /local_var_lib -
After all the files are copied from the container to the local directory, stop the container and point /var/lib to the new target. Make sure the /local_var_lib directory contains the Hadoop directories (hbase, hadoop-hdfs, oozie, mysql, etc.).
Start the container:
docker run --name cloudera \
--hostname=quickstart.cloudera \
--privileged=true \
-td \
-p 2181:2181 \
-p 8888:8888 \
-p 7180:7180 \
-p 6680:80 \
-p 7187:7187 \
-p 8079:8079 \
-p 8080:8080 \
-p 8085:8085 \
-p 8400:8400 \
-p 8161:8161 \
-p 9090:9090 \
-p 9095:9095 \
-p 60000:60000 \
-p 60010:60010 \
-p 60020:60020 \
-p 60030:60030 \
-v /local_var_lib:/var/lib \
cloudera/quickstart /usr/bin/docker-quickstart
You should then run:
docker exec -it "YOUR CLOUDERA CONTAINER" chown -R hdfs:hadoop /var/lib/hadoop-hdfs/
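To confirm that the datanode actually comes up against the mounted /var/lib, a quick check (reusing the container name cloudera from the run command above and the report command from the earlier question) might be:
# Verify HDFS health and ownership from inside the running container.
docker exec -it cloudera hdfs dfsadmin -report
docker exec -it cloudera ls -ld /var/lib/hadoop-hdfs   # should be owned by hdfs:hadoop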

Docker intercontainer communication

I would like to run Hadoop and Flume dockerized. I have a standard Hadoop image with all the default values. I cannot see how these services can communicate with each other when placed in separate containers.
Flume's Dockerfile looks like this:
FROM ubuntu:14.04.4
RUN apt-get update && apt-get install -q -y --no-install-recommends wget
RUN mkdir /opt/java
RUN wget --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" -qO- \
https://download.oracle.com/otn-pub/java/jdk/8u20-b26/jre-8u20-linux-x64.tar.gz \
| tar zxvf - -C /opt/java --strip 1
RUN mkdir /opt/flume
RUN wget -qO- http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz \
| tar zxvf - -C /opt/flume --strip 1
ADD flume.conf /var/tmp/flume.conf
ADD start-flume.sh /opt/flume/bin/start-flume
ENV JAVA_HOME /opt/java
ENV PATH /opt/flume/bin:/opt/java/bin:$PATH
CMD [ "start-flume" ]
EXPOSE 10000
You should link your containers. There are several ways you can implement this.
1) Publish ports:
docker run -p 50070:50070 hadoop
The -p option binds port 50070 of your Docker container to port 50070 of the host machine.
2) Link containers (using docker-compose)
docker-compose.yml
version: '2'
services:
  hadoop:
    image: hadoop:2.6
  flume:
    image: flume:last
    links:
      - hadoop
The links option here connects your flume container to the hadoop container.
More info: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
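As a quick sketch of what the link buys you (assuming the Compose file above), the flume service can resolve the hadoop service by name, so Flume's configuration can address Hadoop by hostname rather than by IP; the agent and sink names below are placeholders:
# From the flume container, the hadoop service is reachable by its service name.
docker-compose run --rm flume getent hosts hadoop
# So flume.conf can point an HDFS sink at that hostname, for example:
#   agent.sinks.hdfs-sink.hdfs.path = hdfs://hadoop:8020/flume/events
# (the NameNode port depends on how your Hadoop image is configured)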

Use parted into a Chef recipe to build a partition

Just starting with Chef, I'm trying to convert my old bash provisioning scripts to something more modern and reliable, using Chef.
The first script is the one I used to build a partition and mount it on /opt.
This is the script: https://github.com/theclue/db2-vagrant/blob/master/provision_for_mount_disk.sh
#!/bin/bash
yum install -y parted
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary ext4 0% 100%
sleep 3
#-m switch tells mkfs to only reserve 1% of the blocks for the super block
mkfs.ext4 /dev/sdb1
e2label /dev/sdb1 "opt"
######### mount sdb1 to /opt ##############
chmod 777 /opt
mount /dev/sdb1 /opt
chmod 777 /opt
echo '/dev/sdb1 /opt ext4 defaults 0 0' >> /etc/fstab
I found a parted recipe here, but it doesn't seem to support all the parameters I need (0% and 100%, to name two), and in any case I have no idea how to do the formatting/mounting block.
Any ideas?
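One hedged way to approach the formatting/mounting block is to make each step idempotent with a guard (device, label, and fstab line taken from the script above); in Chef, checks like these map naturally onto not_if guards on execute resources, with the fstab entry and mount handled by a mount resource:
# Guard sketch: only partition, format, and mount when not already done.
if [ ! -b /dev/sdb1 ]; then
    parted /dev/sdb --script mklabel msdos mkpart primary ext4 0% 100%
    sleep 3
fi
blkid /dev/sdb1 | grep -q ext4 || { mkfs.ext4 /dev/sdb1 && e2label /dev/sdb1 "opt"; }
grep -q '^/dev/sdb1 ' /etc/fstab || echo '/dev/sdb1 /opt ext4 defaults 0 0' >> /etc/fstab
mountpoint -q /opt || mount /dev/sdb1 /opt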
