How to fix this Elasticsearch agent on VM of Azure Kubernetes Services? - elasticsearch

I installed the Elasticsearch agent (Metricbeat) on a VM of AKS. No data is sent to Kibana on Elastic Cloud (which is hosted outside AKS). When I tested the Metricbeat modules, I got this error:
azureuser@aks-agentpool-yyyyyy-0:/var/log/elasticsearch$ sudo metricbeat test modules
Error getting metricbeat modules:
module initialization error:
5 errors:
reading bearer token file:
open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory; reading bearer token file:
open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory; reading bearer token file:
open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory; reading bearer token file:
open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory; reading bearer token file:
open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
I checked and the error seems to come from the lines below in kubernetes.yml:
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.certificate_authorities:
- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
How can I create the secret on the Azure VM to solve this problem?
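For context on why the file is missing: /var/run/secrets/kubernetes.io/serviceaccount/token is mounted automatically only inside a pod, so when Metricbeat runs directly on the node there is no service account token to read, and creating a secret on the VM will not produce it. One hedged workaround (a sketch, not the documented fix; running Metricbeat as a DaemonSet inside the cluster is what gets the token mounted for free) is to drop the in-cluster auth settings and point the module at the kubelet:

```yaml
# /etc/metricbeat/modules.d/kubernetes.yml - sketch for running Metricbeat
# on the node itself rather than as a pod. The in-cluster service account
# token does not exist on the host, so the bearer-token settings are removed.
- module: kubernetes
  metricsets: ["node", "system", "pod", "container", "volume"]
  period: 10s
  hosts: ["localhost:10255"]  # kubelet read-only port; an assumption - it may be disabled on AKS
  # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  # ssl.certificate_authorities:
  #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
```

If the read-only port is disabled, the secured port 10250 with proper credentials, or the in-cluster DaemonSet deployment, would be needed instead.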
Supplements
I followed https://learn.microsoft.com/en-us/azure/aks/ssh to connect to the VMSS/VM and already installed Metricbeat, but no data is sent to Kibana.
Below are the steps that I performed:
# CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group XXX --name YYY --query nodeResourceGroup -o tsv)
# SCALE_SET_NAME=$(az vmss list --resource-group $CLUSTER_RESOURCE_GROUP --query [0].name -o tsv)
# az vmss extension set \
--resource-group $CLUSTER_RESOURCE_GROUP \
--vmss-name $SCALE_SET_NAME \
--name VMAccessForLinux \
--publisher Microsoft.OSTCExtensions \
--version 1.4 \
--protected-settings "{\"username\":\"azureuser\", \"ssh_key\":\"$(cat ~/.ssh/id_rsa.pub)\"}"
# az vmss update-instances --instance-ids '*' \
--resource-group $CLUSTER_RESOURCE_GROUP \
--name $SCALE_SET_NAME
# kubectl get nodes -o wide
# az vm list --resource-group $CLUSTER_RESOURCE_GROUP -o table
# az vm list-ip-addresses --resource-group $CLUSTER_RESOURCE_GROUP -o table
# kubectl run --generator=run-pod/v1 -it --rm aks-ssh --image=debian
// Inside aks-ssh
apt-get update && apt-get install openssh-client -y
// Open another terminal then copy SSH key
# kubectl cp ~/.ssh/id_rsa $(kubectl get pod -l run=aks-ssh -o jsonpath='{.items[0].metadata.name}'):/id_rsa
// Inside aks-ssh again
# chmod 0600 id_rsa
// Connect to VMSS/VM:
# ssh -i id_rsa azureuser@10.240.0.4
// -- in VMSS --
// Download Metricbeat (use the deb package)
# curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.5.0-amd64.deb
# sudo dpkg -i metricbeat-7.5.0-amd64.deb
// Modify metricbeat.yml:
# sudo nano /etc/metricbeat/metricbeat.yml
// Add below in “Elastic Cloud” section
cloud.id: "<--id-->"
cloud.auth: "<--auth-->"
// Enable Kubernetes
# sudo metricbeat modules enable kubernetes
// Modify kubernetes.yml
# sudo nano /etc/metricbeat/modules.d/kubernetes.yml
// Start Metricbeat
# sudo metricbeat setup
# sudo service metricbeat start
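Before assuming a secrets problem, the install above can be sanity-checked on the VM with Metricbeat's built-in test subcommands (the same family as the `test modules` command used earlier):

```shell
# Verify the configuration files parse cleanly.
sudo metricbeat test config

# Verify Metricbeat can reach and authenticate to the Elastic Cloud
# output configured via cloud.id / cloud.auth.
sudo metricbeat test output
```

If `test output` fails, the missing data in Kibana is a connectivity/credentials issue rather than a kubernetes-module issue.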

Related

Bash script not running properly in OpenStack server creation

I have an OpenStack server in which I want to create an instance with a user-data file, for example:
openstack server create --flavor 2 --image 34bf1632-86ed-46ca-909e-c6ace830f91f --nic net-id=d444145e-3ccb-4685-88ee --security-group default --key-name Adeel --user-data ./adeel/script.sh m3
script.sh contains:
#cloud-config
password: mypasswd
chpasswd: { expire: False }
ssh_pwauth: True
#!/bin/sh
wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.7-linux-x86_64.tar.gz \
  && tar -xzf elastic-agent-7.17.7-linux-x86_64.tar.gz \
  && cd elastic-agent-7.17.7-linux-x86_64
sudo ./elastic-agent install \
--fleet-server-es=http://localhost:9200 \
--fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2Njc0MDM1 \
--fleet-server-policy=499b5aa7-d214-5b5d \
--fleet-server-insecure-http
When I add this script, nothing is executed. I want to run the above script when my instance boots for the first time.
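One likely cause: cloud-init treats the user-data as a single document, so a file that starts with #cloud-config cannot also contain a #!/bin/sh section; the shell part is silently ignored. A hedged fix (a sketch; the download URL, token, and policy values are copied from the question) is to move the commands under runcmd so they run on first boot, as root:

```yaml
#cloud-config
password: mypasswd
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
  # runcmd entries run once, at first boot, as root (no sudo needed).
  - wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.7-linux-x86_64.tar.gz
  - tar -xzf elastic-agent-7.17.7-linux-x86_64.tar.gz
  - >
    cd elastic-agent-7.17.7-linux-x86_64 &&
    ./elastic-agent install -f
    --fleet-server-es=http://localhost:9200
    --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2Njc0MDM1
    --fleet-server-policy=499b5aa7-d214-5b5d
    --fleet-server-insecure-http
```

The -f flag skips the interactive install confirmation, which would otherwise hang in a non-interactive boot; /var/log/cloud-init-output.log on the instance shows whether the commands actually ran.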

Docker/K8 : OpenSSL SSL_connect: SSL_ERROR_SYSCALL

Running a k8s CronJob against an endpoint. The test works like a charm locally, and even when I sleep infinity at the end of my entrypoint and then curl inside the container. However, once the cron kicks off I get a funky error:
[ec2-user#ip-10-122-8-121 device-purge]$ kubectl logs appgate-device-cron-job-1618411080-29lgt -n device-purge
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 52.61.245.214:444
docker-entrypoint.sh
#! /bin/sh
export api_vs_hd=$API_VS_HD
export controller_ip=$CONTROLLER_IP
export password=$PASSWORD
export uuid=$UUID
export token=$TOKEN
# should be logged in after token export
# Test API call: list users
curl -k -H "Content-Type: application/json" \
-H "$api_vs_hd" \
-H "Authorization: Bearer $token" \
-X GET \
https://$controller_ip:444/admin/license/users
# test
# sleep infinity
Dockerfile
FROM harbor/privateop9/python38:latest
# Use root user for packages installation
USER root
# Install packages
RUN yum update -y && yum upgrade -y
# Install curl
RUN yum install curl -y \
&& curl --version
# Install zip/unzip/gunzip
RUN yum install zip unzip -y \
&& yum install gzip -y
# Install wget
RUN yum install wget -y
# Install jq
RUN wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
RUN chmod +x ./jq
RUN cp jq /usr/bin
# Install aws cli
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
## set working directory
WORKDIR /home/app
# Add user
RUN groupadd --system user && adduser --system user --no-create-home --gid user
RUN chown -R user:user /home/app && chmod -R 777 /home/app
# Make sure that your shell script file is in the same folder as your dockerfile while running the docker build command as the below command will copy the file to the /home/root/ folder for execution
# COPY . /home/root/
COPY ./docker-entrypoint.sh /home/app
RUN chmod +x docker-entrypoint.sh
# Switch to non-root user
USER user
# Run service
ENTRYPOINT ["/home/app/docker-entrypoint.sh"]
Cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: device-cron-job
  namespace: device-purge
spec:
  # Cron time is set according to server time; check the server time zone and set accordingly.
  schedule: "*/2 * * * *" # test
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: appgate-cron
          containers:
            - name: device-cron-pod
              image: harbor/privateop9/python38:device-purge
              env:
                - name: API_VS_HD
                  value: "Accept:application/vnd.appgate.peer-v13+json"
                - name: CONTROLLER_IP
                  value: "value"
                - name: UUID
                  value: "value"
                - name: TOKEN
                  value: >-
                    curl -H "Content-Type: application/json" -H "${api_vs_hd}" --request POST
                    --data "{\"providerName\":\"local\",\"username\":\"admin\",\"password\":\"$password\",\"deviceId\":\"$uuid\"}"
                    https://$controller_ip:444/admin/login --insecure | jq -r '.token'
              imagePullPolicy: Always
          restartPolicy: OnFailure
      backoffLimit: 3
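A separate problem worth noting in the manifest above: the TOKEN env var is set to a literal string containing a curl command. Kubernetes passes env values verbatim and never executes them, so the entrypoint receives the text of the command rather than a token. A hedged sketch (the parse_token helper and the runtime fetch are my additions, not from the post) moves the login call into the entrypoint instead:

```shell
#!/bin/sh
# Extract .token from a JSON body; a jq-free fallback using sed.
parse_token() {
  sed -n 's/.*"token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# Fetch the token at container start instead of baking a shell command
# into the TOKEN env value (env values are passed verbatim, not executed).
if [ -n "$CONTROLLER_IP" ]; then
  TOKEN=$(curl -sk -H "Content-Type: application/json" -H "$API_VS_HD" \
    --request POST \
    --data "{\"providerName\":\"local\",\"username\":\"admin\",\"password\":\"$PASSWORD\",\"deviceId\":\"$UUID\"}" \
    "https://$CONTROLLER_IP:444/admin/login" | parse_token)
  export TOKEN
fi
```

With this in place, the TOKEN entry can be dropped from the CronJob env entirely.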
Please help! I am running out of ideas....
Update: the issue turned out to be on the server side, due to a firewall with IP whitelisting set up on the AWS cloud account. After the security team on the account addressed it, I was able to get past the blocker.
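For similar cases, a couple of quick checks from inside the pod can distinguish a TLS problem from a plain connectivity block like the one described above (host and port are taken from the error message; nc and openssl may need to be installed in the image):

```shell
# Check basic TCP reachability first (a firewall drop usually fails here).
nc -zv -w 5 52.61.245.214 444

# If TCP connects, check whether a TLS handshake actually completes.
openssl s_client -connect 52.61.245.214:444 </dev/null
```

SSL_ERROR_SYSCALL during connect typically means the connection was cut below the TLS layer, which points at the network path rather than certificates.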

Method to install Elasticsearch on Azure Kubernetes Service

I successfully ran Elasticsearch on Docker from my Mac. However, I don't know how to find the AKS VM/Linux system to install Elasticsearch on. There is no specific guideline in this elastic document.
Supplement: the steps I performed are the same as those listed in the first question above (SSH to the VMSS node via an aks-ssh pod, install the Metricbeat deb package, set cloud.id and cloud.auth in metricbeat.yml, enable the kubernetes module, then run metricbeat setup and start the service).
If you want to set up Elasticsearch on Ubuntu or a plain VM, you can simply install the deb package using dpkg or apt-get, then configure the ES service and start it.
You can find more details here: https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html
If you are looking for the whole Elasticsearch, Logstash and Kibana stack, you can follow:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-16-04
https://linuxize.com/post/how-to-install-elasticsearch-on-ubuntu-18-04/
However, if you are looking to install it on Kubernetes, you can use the Helm chart:
https://github.com/elastic/helm-charts/tree/master/elasticsearch
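The Helm route suggested above can be sketched as follows (the chart repo URL is the documented one; the release name and namespace are examples):

```shell
# Add the official Elastic chart repository and deploy Elasticsearch.
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch \
  --namespace elastic --create-namespace
```

This runs Elasticsearch inside the AKS cluster itself, so there is no need to SSH into the node VMs at all.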

How to create an EC2 instance with docker-machine using an existing key pair?

I use this command to create an instance on AWS:
docker-machine create \
-d amazonec2 \
--amazonec2-region ap-northeast-1 \
--amazonec2-zone a \
--amazonec2-ami ami-XXXXXX \
--amazonec2-keypair-name my_key_pair \
--amazonec2-ssh-keypath ~/.ssh/id_rsa \
my_instance
I can't connect to it by SSH.
my_key_pair is the name of a key pair that exists on AWS, and ~/.ssh/id_rsa is my local SSH private key. How do I set the right values?
I have read the documentation but didn't find an example of using both --amazonec2-keypair-name and --amazonec2-ssh-keypath.
Download the file from "Key Pairs" in AWS Console and place it in ~/.ssh.
Then run
docker-machine create \
-d amazonec2 \
--amazonec2-region ap-northeast-1 \
--amazonec2-zone a \
--amazonec2-ami ami-XXXXXX \
--amazonec2-ssh-keypath ~/.ssh/keypairfile \
my_instance
In gitlab-runner using docker+machine, you have to provide both "amazonec2-keypair-name=XXX" and "amazonec2-ssh-keypath=XXX".
The keypath should be something like /home/gitlab-runner/.ssh/id_rsa, and that directory should also contain the matching id_rsa.pub alongside id_rsa. These two files should not be your locally generated key; they should be the id_rsa and id_rsa.pub derived from the PEM key pair created on AWS.
The following commands will do the trick,
cat faregate-test.pem > /home/gitlab-runner/.ssh/id_rsa
ssh-keygen -y -f faregate-test.pem > /home/gitlab-runner/.ssh/id_rsa.pub
This will allow you to connect from the runner manager instance to the provisioned runner using your existing AWS key pair.

Build failed while appending line in source of docker container

I'm working on https://github.com/audip/rpi-haproxy and get this error message when building the docker container:
Build failed: The command '/bin/sh -c echo "deb http://httpredir.debian.org/debian jessie-backports main" >> /etc/apt/sources.list' returned a non-zero code: 1
This can be viewed at https://hub.docker.com/r/audip/rpi-haproxy/builds/brxdkayq3g45jjhppndcwnb/
I tried to find answers, but the problem seems to be something off on line 4 of the Dockerfile. I need help to fix this failing build.
# Pull base image.
FROM resin/rpi-raspbian:latest
# Enable Jessie backports
RUN echo "deb http://httpredir.debian.org/debian jessie-backports main" >> /etc/apt/sources.list
# Setup GPG keys
RUN gpg --keyserver pgpkeys.mit.edu --recv-key 8B48AD6246925553 \
&& gpg -a --export 8B48AD6246925553 | sudo apt-key add - \
&& gpg --keyserver pgpkeys.mit.edu --recv-key 7638D0442B90D010 \
&& gpg -a --export 7638D0442B90D010 | sudo apt-key add -
# Install HAProxy
RUN apt-get update \
&& apt-get install haproxy -t jessie-backports
# Define working directory.
WORKDIR /usr/local/etc/haproxy/
# Copy config file to container
COPY haproxy.cfg .
COPY start.bash .
# Define mountable directories.
VOLUME ["/haproxy-override"]
# Run loadbalancer
# CMD ["haproxy", "-f", "/usr/local/etc/haproxy/haproxy.cfg"]
# Define default command.
CMD ["bash", "start.bash"]
# Expose ports.
EXPOSE 80
EXPOSE 443
From your logs:
standard_init_linux.go:178: exec user process caused "exec format error"
It's complaining about an invalid binary format. The image you are using is a Raspberry Pi image, which would be based on an ARM chipset. Your build is running on an AMD64 chipset. These are not binary compatible. I believe this image is designed to be built on a Pi itself.
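Besides building on a Pi itself, one hedged alternative (a sketch, assuming a Docker installation with buildx available; the image tag is an example) is to cross-build the ARM image from the amd64 host using QEMU emulation:

```shell
# Register QEMU user-mode emulation so ARM binaries can run during the build.
docker run --privileged --rm tonistiigi/binfmt --install arm

# Cross-build the image for 32-bit ARM (Raspberry Pi) from an amd64 host.
docker buildx build --platform linux/arm/v7 -t audip/rpi-haproxy:armv7 .
```

Emulated builds are slower than native ones, but they avoid the "exec format error" entirely.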