I successfully ran Elasticsearch on Docker from my Mac. However, I don't know how to reach AKS's VM/Linux system to install Elasticsearch there. There is no specific guideline in this Elastic document.
Supplement: the steps that I performed:
# CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group XXX --name YYY --query nodeResourceGroup -o tsv)
# SCALE_SET_NAME=$(az vmss list --resource-group $CLUSTER_RESOURCE_GROUP --query [0].name -o tsv)
# az vmss extension set \
--resource-group $CLUSTER_RESOURCE_GROUP \
--vmss-name $SCALE_SET_NAME \
--name VMAccessForLinux \
--publisher Microsoft.OSTCExtensions \
--version 1.4 \
--protected-settings "{\"username\":\"azureuser\", \"ssh_key\":\"$(cat ~/.ssh/id_rsa.pub)\"}"
# az vmss update-instances --instance-ids '*' \
--resource-group $CLUSTER_RESOURCE_GROUP \
--name $SCALE_SET_NAME
# kubectl get nodes -o wide
# az vm list --resource-group $CLUSTER_RESOURCE_GROUP -o table
# az vm list-ip-addresses --resource-group $CLUSTER_RESOURCE_GROUP -o table
# kubectl run --generator=run-pod/v1 -it --rm aks-ssh --image=debian
// Inside aks-ssh
apt-get update && apt-get install openssh-client -y
// Open another terminal then copy SSH key
# kubectl cp ~/.ssh/id_rsa $(kubectl get pod -l run=aks-ssh -o jsonpath='{.items[0].metadata.name}'):/id_rsa
// Inside aks-ssh again
# chmod 0600 id_rsa
// Connect to the VMSS instance / VM:
# ssh -i id_rsa azureuser@10.240.0.4
// -- in VMSS --
// Download Metricbeat (use the deb package)
# curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.5.0-amd64.deb
# sudo dpkg -i metricbeat-7.5.0-amd64.deb
// Modify metricbeat.yml:
# sudo nano /etc/metricbeat/metricbeat.yml
// Add the lines below in the “Elastic Cloud” section
cloud.id: "<--id-->"
cloud.auth: "<--auth-->"
// Enable Kubernetes
# sudo metricbeat modules enable kubernetes
// Modify kubernetes.yml
# sudo nano /etc/metricbeat/modules.d/kubernetes.yml
// Start Metricbeat
# sudo metricbeat setup
# sudo service metricbeat start
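// To sanity-check the setup before expecting data in Kibana, Metricbeat ships test subcommands (a quick check using the same binary installed above):
# sudo metricbeat test config
# sudo metricbeat test output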
If you want to set up Elasticsearch on Ubuntu or a VM, you can simply install the deb package using dpkg or apt-get. You can then configure the Elasticsearch service and start it, as sketched below.
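A minimal sketch of that flow (the 7.5.0 version below is only an example):
# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.0-amd64.deb
# sudo dpkg -i elasticsearch-7.5.0-amd64.deb
# sudo systemctl enable elasticsearch
# sudo systemctl start elasticsearch
// verify it responds
# curl http://localhost:9200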
You can find more details here: https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html
If you are looking for the whole Elasticsearch, Logstash and Kibana solution, you can follow: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-16-04
https://linuxize.com/post/how-to-install-elasticsearch-on-ubuntu-18-04/
However, if you are looking for a way to install it on Kubernetes, you can use the Helm chart.
https://github.com/elastic/helm-charts/tree/master/elasticsearch
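For example, with Helm 3 (the release name "elasticsearch" is illustrative):
# helm repo add elastic https://helm.elastic.co
# helm repo update
# helm install elasticsearch elastic/elasticsearch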
I have an OpenStack server on which I want to create an instance with a user-data file, for example:
openstack server create --flavor 2 --image 34bf1632-86ed-46ca-909e-c6ace830f91f --nic net-id=d444145e-3ccb-4685-88ee --security-group default --key-name Adeel --user-data ./adeel/script.sh m3
script.sh contains:
#cloud-config
password: mypasswd
chpasswd: { expire: False }
ssh_pwauth: True
#!/bin/sh
wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.7-linux-x86_64.tar.gz && tar -xzf elastic-agent-7.17.7-linux-x86_64.tar.gz
cd elastic-agent-7.17.7-linux-x86_64
sudo ./elastic-agent install \
--fleet-server-es=http://localhost:9200 \
--fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2Njc0MDM1 \
--fleet-server-policy=499b5aa7-d214-5b5d \
--fleet-server-insecure-http
When I add this script, nothing is executed. I want to run the above script when my instance boots for the first time.
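One likely cause (my assumption, not confirmed in the post): cloud-init user data can carry only one payload type, so everything after the second shebang (#!/bin/sh) inside a #cloud-config document is ignored. A sketch of a single cloud-config that runs the commands at first boot via runcmd:
#cloud-config
password: mypasswd
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
  - wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.7-linux-x86_64.tar.gz
  - tar -xzf elastic-agent-7.17.7-linux-x86_64.tar.gz
  - cd elastic-agent-7.17.7-linux-x86_64 && ./elastic-agent install --fleet-server-es=http://localhost:9200 --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2Njc0MDM1 --fleet-server-policy=499b5aa7-d214-5b5d --fleet-server-insecure-http
// each runcmd entry runs in its own shell at first boot, as root, so the cd and the install are chained with &&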
I installed the Elasticsearch agent on the VM of AKS. There is no data sent to the Kibana cloud (which is hosted outside AKS). When I tested the Elasticsearch modules, there was an error:
azureuser@aks-agentpool-yyyyyy-0:/var/log/elasticsearch$ sudo metricbeat test modules
Error getting metricbeat modules:
module initialization error:
5 errors:
reading bearer token file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory;
reading bearer token file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory;
reading bearer token file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory;
reading bearer token file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory;
reading bearer token file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
I checked that the error comes from the lines below in kubernetes.yml:
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.certificate_authorities:
- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
How can I create this secret on the Azure VM to solve the problem?
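A sketch of one workaround (my assumption: since Metricbeat runs on the node itself rather than in a pod, the in-cluster service-account files simply don't exist there). Comment out the in-cluster credentials and, if the kubelet's read-only port (10255) is enabled, point the module at it in /etc/metricbeat/modules.d/kubernetes.yml:
- module: kubernetes
  metricsets: ["node", "pod", "container", "system", "volume"]
  period: 10s
  hosts: ["localhost:10255"]
  # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  # ssl.certificate_authorities:
  #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
Alternatively, run Metricbeat as a DaemonSet inside the cluster, where those service-account paths are mounted automatically.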
Supplement:
I already followed https://learn.microsoft.com/en-us/azure/aks/ssh to connect to the VMSS / VM and installed Metricbeat, but there is no data sent to Kibana.
The steps I performed are identical to those listed in the question above.
I am trying to run Elasticsearch on Docker.
My environment is as follows:
host system: OSX 10.12.5
docker: 17.05.0-ce
docker base image: centos:latest
I was following this article, but it got stuck at systemctl daemon-reload.
I found the official CentOS response about this D-Bus bug, but when I ran the docker run command it showed the message below:
[!!!!!!] Failed to mount API filesystems, freezing.
How could I solve this problem?
FYI, here is the Dockerfile from which I built the image:
FROM centos
MAINTAINER juneyoung <juneyoung@hanmail.net>
ARG u=elastic
ARG uid=1000
ARG g=elastic
ARG gid=1000
ARG p=elastic
# add USER
RUN groupadd -g ${gid} ${g}
RUN useradd -d /home/${u} -u ${uid} -g ${g} -s /bin/bash ${u}
# systemctl settings from official Centos github
# https://github.com/docker-library/docs/tree/master/centos#systemd-integration
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
# yum settings
RUN yum -y update
RUN yum -y install java-1.8.0-openjdk.x86_64
ENV JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-3.b12.el7_3.x86_64/jre/
# install wget
RUN yum install -y wget
# install net-tools : netstat, ifconfig
RUN yum install -y net-tools
# Elasticsearch install
ENV ELASTIC_VERSION=5.4.0
RUN rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
RUN wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ELASTIC_VERSION}.rpm
RUN rpm -ivh elasticsearch-${ELASTIC_VERSION}.rpm
CMD ["/usr/sbin/init"]
and I ran it with the command:
docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro --name=elastic2 elastic2
First, thanks to @Robert.
I did not think of it that way.
All I have to do is edit my CMD instruction.
Change it to:
CMD ["elasticsearch"]
However, some chores remain to access it from the browser; refer to this Elasticsearch forum post.
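A sketch of those chores (my assumptions, not taken from the forum post): publish the HTTP port when starting the container, and let Elasticsearch bind to more than localhost:
# docker run -d -p 9200:9200 --name=elastic2 elastic2
// and inside the container, in /etc/elasticsearch/elasticsearch.yml:
network.host: 0.0.0.0
Note that binding to a non-loopback address makes Elasticsearch 5.x enforce its bootstrap checks, so settings like vm.max_map_count on the Docker host may also need raising.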
You could follow the commands for a systemd-enabled OS if you replace the normal systemctl binary. That's how I install Elasticsearch in a CentOS docker container.
See "docker-systemctl-replacement" for the details.
I'm working on https://github.com/audip/rpi-haproxy and get this error message when building the docker container:
Build failed: The command '/bin/sh -c echo "deb http://httpredir.debian.org/debian jessie-backports main" >> /etc/apt/sources.list' returned a non-zero code: 1
This can be viewed at https://hub.docker.com/r/audip/rpi-haproxy/builds/brxdkayq3g45jjhppndcwnb/
I tried to find answers, but the problem seems to be something off on line 4 of the Dockerfile. I need help to keep this build from failing.
# Pull base image.
FROM resin/rpi-raspbian:latest
# Enable Jessie backports
RUN echo "deb http://httpredir.debian.org/debian jessie-backports main" >> /etc/apt/sources.list
# Setup GPG keys
RUN gpg --keyserver pgpkeys.mit.edu --recv-key 8B48AD6246925553 \
&& gpg -a --export 8B48AD6246925553 | sudo apt-key add - \
&& gpg --keyserver pgpkeys.mit.edu --recv-key 7638D0442B90D010 \
&& gpg -a --export 7638D0442B90D010 | sudo apt-key add -
# Install HAProxy
RUN apt-get update \
&& apt-get install haproxy -t jessie-backports
# Define working directory.
WORKDIR /usr/local/etc/haproxy/
# Copy config file to container
COPY haproxy.cfg .
COPY start.bash .
# Define mountable directories.
VOLUME ["/haproxy-override"]
# Run loadbalancer
# CMD ["haproxy", "-f", "/usr/local/etc/haproxy/haproxy.cfg"]
# Define default command.
CMD ["bash", "start.bash"]
# Expose ports.
EXPOSE 80
EXPOSE 443
From your logs:
standard_init_linux.go:178: exec user process caused "exec format error"
It's complaining about an invalid binary format. The image you are using is a Raspberry Pi image, which is based on an ARM chipset, but your build is running on an AMD64 chipset. These are not binary compatible. I believe this image is designed to be built on a Pi itself.
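If you do want to build the ARM image from an AMD64 machine, one option (a sketch, assuming a Docker version with buildx and QEMU user-mode emulation available) is:
// register ARM emulation, then cross-build for the Pi
# docker run --privileged --rm tonistiigi/binfmt --install arm
# docker buildx build --platform linux/arm/v7 -t audip/rpi-haproxy .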
I would like to run Hadoop and Flume dockerized. I have a standard Hadoop image with all the default values, but I cannot see how these services can communicate with each other when placed in separate containers.
Flume's Dockerfile looks like this:
FROM ubuntu:14.04.4
RUN apt-get update && apt-get install -q -y --no-install-recommends wget
RUN mkdir /opt/java
RUN wget --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" -qO- \
https://download.oracle.com/otn-pub/java/jdk/8u20-b26/jre-8u20-linux-x64.tar.gz \
| tar zxvf - -C /opt/java --strip 1
RUN mkdir /opt/flume
RUN wget -qO- http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz \
| tar zxvf - -C /opt/flume --strip 1
ADD flume.conf /var/tmp/flume.conf
ADD start-flume.sh /opt/flume/bin/start-flume
ENV JAVA_HOME /opt/java
ENV PATH /opt/flume/bin:/opt/java/bin:$PATH
CMD [ "start-flume" ]
EXPOSE 10000
You should link your containers. There are a few ways to implement this.
1) Publish ports:
docker run -p 50070:50070 hadoop
The -p option binds port 50070 of your docker container to port 50070 of the host machine.
2) Link containers (using docker-compose)
docker-compose.yml
version: '2'
services:
  hadoop:
    image: hadoop:2.6
  flume:
    image: flume:last
    links:
      - hadoop
The links option here binds your flume container to the hadoop container.
More info about this: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
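Note that links is a legacy feature; a user-defined bridge network gives the same name-based resolution (a sketch, the container and network names are illustrative):
# docker network create flume-net
# docker run -d --name hadoop --network flume-net hadoop:2.6
# docker run -d --name flume --network flume-net flume:last
// flume can now reach the hadoop container at the hostname "hadoop"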