ELK Metricbeat - Monitor a remote MySQL DB with Elasticsearch

I have installed the ELK stack (with Metricbeat) on ServerA and want to monitor MySQL on ServerB. I added the DB host details to the Metricbeat MySQL module file on ServerA (/etc/metricbeat/modules.d/mysql.yml):
- module: mysql
  metricsets:
    - status
    - performance
  period: 10s
  hosts: ["tcp(ServerB:3306)/"]
  username: mysql
  password: password
After I start Metricbeat, instead of connecting to ServerB it seems to try to connect to the local MySQL on ServerA. Below is the error:
Error fetching data for metricset mysql.status: Error 1045: Access denied for user 'mysql'@'ServerA...' (using password: YES)
Can someone help me with this?

It was an access issue. Once I granted the required privileges, I was able to connect.
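For reference, a minimal sketch of such a grant, run on ServerB's MySQL. The user, host, and password below are placeholders rather than the exact statement used; the Metricbeat status and performance metricsets only need read-level access:
-- Hypothetical example: let the monitoring user connect from ServerA
-- with just enough privileges for the status/performance queries.
CREATE USER 'mysql'@'ServerA' IDENTIFIED BY 'password';
GRANT PROCESS ON *.* TO 'mysql'@'ServerA';
GRANT SELECT ON performance_schema.* TO 'mysql'@'ServerA';
FLUSH PRIVILEGES;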

Related

Ansible having problems authenticating with Google Cloud Platform

We are using Ansible to deploy an image to Google Kubernetes Cluster (GKE).
We have set up Ubuntu 20.04 and Python 3.8.5.
playbook.main.yml:
---
- hosts: localhost
  vars:
    k8s_file_path: /home/pesinn/Documents/...
  become: yes
  become_method: sudo
  roles:
    - k8s
main.yml:
- name: First Deployment
  k8s:
    kubeconfig: /home/pesinn/.kube/config
    src: "{{k8s_file_path}}/deployment.yml"
When trying to deploy the image defined in the deployment.yml file by running the playbook, we get this error:
kubernetes.config.config_exception.ConfigException: cmd-path: process returned 1
Cmd: /home/pesinn/y/google-cloud-sdk/bin/gcloud config config-helper --format=json
Stderr: WARNING: Could not open the configuration file: [/root/.config/gcloud/configurations/config_default].
ERROR: (gcloud.config.config-helper) You do not currently have an active account selected.
Please run:
$ gcloud auth login
What we've already done
Initialized the cloud: gcloud init
Logged in and chose a project: gcloud auth login
Run export GOOGLE_APPLICATION_CREDENTIALS="path_to_service_account_key.json"
Run gcloud container clusters get-credentials {gke_project} --region {region}
Run the playbook sudo ansible-playbook playbook.main.yml -vvv
Run gcloud config config-helper --format=json on the local machine without any problems
What is very strange here is that we're logged in for sure. We can access the GKE cluster through kubectl command on the local machine. However, Ansible complains about us not being logged in. Also, in the error logs, we see that it is trying to open /root/.config/gcloud/configurations/config_default. Our default config file is, on the other hand, located in the home folder.
This error occurs randomly. Sometimes Ansible can detect our login and deploys the image, but sometimes it gives us this error. Both scenarios can happen without any code changes being made.
For some reason, Ansible does not use GCP's default environment variables for authentication.
You can set
GCP_AUTH_KIND
GCP_SERVICE_ACCOUNT_EMAIL
GCP_SERVICE_ACCOUNT_FILE
GCP_SCOPES
GCP_SERVICE_ACCOUNT_FILE is the equivalent of GOOGLE_APPLICATION_CREDENTIALS
Reference: https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html#providing-credentials-as-environment-variables
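A rough sketch of what that looks like in practice; the key file path and service account email below are placeholders:
# Hypothetical example: authenticate Ansible's GCP modules with a service
# account key instead of relying on the gcloud user login.
export GCP_AUTH_KIND=serviceaccount
export GCP_SERVICE_ACCOUNT_FILE=/home/pesinn/path_to_service_account_key.json
export GCP_SERVICE_ACCOUNT_EMAIL=deployer@my-project.iam.gserviceaccount.com
export GCP_SCOPES=https://www.googleapis.com/auth/cloud-platform
ansible-playbook playbook.main.yml
This also sidesteps the /root/.config/gcloud path in the error: running the playbook under sudo switches HOME to /root, so gcloud looks for its configuration there rather than in your home folder.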

Filebeat over HTTPS

I am a total newbie with ELK, but I'm currently deploying the ELK stack via docker-compose (the TLS part of https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html).
Elasticsearch and Kibana work correctly in HTTPS.
However, I don't understand how to enable Filebeat over HTTPS. I would like to ship my nginx logs, which are located on another server (over the internet, so I do not want to send logs in clear text). Everything works fine over HTTP, but when I switch to HTTPS and reload Filebeat I get the following message:
Error: ... Get https://10.15.0.12:9200: x509: certificate is valid for 127.0.0.1, not 10.15.0.12
I know I'm doing something wrong, but I can't find the answer for Filebeat over HTTPS...
Here is my Filebeat configuration:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.15.0.12:9200"]
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  # Authentication credentials - either API key or username/password.
  username: "elastic"
  password: "myelasticpassword"
Thanks in advance.
I found the error:
My self-signed certificate was issued for the host 127.0.0.1.
I changed the IP in instances.yml, then changed my Filebeat config:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["xx.xx.xx.xx:9200"]
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "mypassword"
  ssl.verification_mode: none
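Note that ssl.verification_mode: none turns off certificate verification entirely, which is risky over the internet. A sketch of a stricter alternative, assuming the stack's CA certificate is copied to the Filebeat host (the path below is a placeholder):
output.elasticsearch:
  hosts: ["xx.xx.xx.xx:9200"]
  protocol: "https"
  username: "elastic"
  password: "mypassword"
  # Trust the self-signed CA instead of disabling verification.
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]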

Connecting Laravel to GAE Cloud SQL Database with PostgreSQL

I am deploying a new application to Google App Engine Flex environment. The app is powered by Laravel.
I am able to connect to the database on my local machine, however, I am unable to connect to the database once I deploy my application.
What information do I need to include in the app.yaml file to connect to the database? Do I require any other information in a different file?
"message": "SQLSTATE[08006] [7] could not connect to server: Connection refused\n\tIs the server running on host \"localhost\" (::1) and accepting\n\tTCP/IP connections on port 5432?\ncould not connect to server: Connection refused\n\tIs the server running on host \"localhost\" (127.0.0.1) and accepting\n\tTCP/IP connections on port 5432? (SQL: select * from \"users\" where \"email\" = mitchell#efficialtec.com and \"users\".\"deleted_at\" is null limit 1)",
This is the contents of my .env file, which allows me to connect locally:
APP_ENV=development
APP_KEY=base64:B0G3Yr82fWO7xw8LrvcOC19DGUAEd32loJlPHCfP2sg=
APP_DEBUG=true
LOG_CHANNEL=stack
DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=DATABASE
DB_USERNAME=USERNAME
DB_PASSWORD=PASSWORD
DB_SOCKET: "/cloudsql/CONNECTION_NAME"
This is the contents of my app.yaml file:
env: flex # let App Engine know we use the flexible environment
service: SERVICENAME
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
  cpu_utilization:
    target_utilization: 0.8
runtime_config:
  document_root: public
skip_files:
  - .env # we want to skip this to make sure we don’t mess stuff up on the server
env_variables:
  # Put production environment variables here.
  APP_ENV: development
  APP_DEBUG: true # or false
  APP_KEY: base64:B0G3Yr82fWO7xw8LrvcOC19DGUAEd32loJlPHCfP2sg=
  APP_LOG: daily
  APP_TIMEZONE: UTC # your timezone of choice
  # Replace USER, PASSWORD, DATABASE, and CONNECTION_NAME with the
  # values obtained when configuring your Cloud SQL instance.
  POSTGRES_USER: USERNAME
  POSTGRES_PASSWORD: PASSWORD
  POSTGRES_DSN: "pgsql:dbname=bnsw;host=/cloudsql/CONNECTION_NAME"
  DB_HOST: localhost
  DB_DATABASE: DATABASE
  DB_USERNAME: USERNAME
  DB_PASSWORD: PASSWORD
  DB_SOCKET: "/cloudsql/CONNECTION_NAME"
beta_settings:
  cloud_sql_instances: "CONNECTION_NAME"
Any help on this would be greatly appreciated, I have been looking on Google for almost a week with no answers yet!
By default, Laravel does not have a DB_SOCKET option under the pgsql connection.
Thus, in order to connect to the Cloud SQL instance from App Engine, your env should look like:
DB_CONNECTION=pgsql
DB_HOST=/cloudsql/CONNECTION_NAME
DB_DATABASE=DATABASE
DB_USERNAME=USERNAME
DB_PASSWORD=PASSWORD
Also, please don't include DB_PORT in the database env variables.
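Applied to the app.yaml above, a sketch of the corrected env_variables block (credentials and CONNECTION_NAME remain placeholders):
env_variables:
  # Connect over the Cloud SQL unix socket: DB_HOST points at the socket
  # path and DB_PORT is omitted entirely.
  DB_CONNECTION: pgsql
  DB_HOST: "/cloudsql/CONNECTION_NAME"
  DB_DATABASE: DATABASE
  DB_USERNAME: USERNAME
  DB_PASSWORD: PASSWORD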

Shipping Logs Securely to a Remote Process Group using MiNiFi

I am having a bit of a challenge with NiFi, MiNiFi precisely. We use MiNiFi to ship logs from remote systems to a NiFi instance, from there to Kafka, and into Elasticsearch. We can successfully do this without HTTPS; however, I was recently tasked with doing the same securely using HTTPS.
Using certificates, I can connect to the NiFi UI. The challenge is that MiNiFi is unable to connect to the RPG on the remote NiFi, with the error "Unable to communicate with Remote NiFi at URI https://xxxx.com:9443/nifi due to: Received fatal alert: handshake_failure". I suspect this is because of the errors below:
2018-07-23 16:27:23,083 INFO [main] o.apache.nifi.controller.FlowController Not enabling RAW Socket Site-to-Site functionality because nifi.remote.input.socket.port is not set
2018-07-23 16:27:23,083 INFO [main] o.apache.nifi.controller.FlowController Not enabling HTTP(S) Site-to-Site functionality because the 'nifi.remote.input.http.enabled' property is not true
I have tried to set these properties in the nifi.properties file of MiNiFi, but the file is always overwritten at each restart with default values loaded.
Please, do you have any ideas on how to resolve this?
How can I bootstrap these settings at startup in the config.yml file or any other place?
You'll need to set those in the original flow that you export from NiFi to MiNiFi. The nifi.properties of the MiNiFi instance is automatically generated from the provided config.yml file. That file is generated by using the MiNiFi Converter Toolkit to convert the exported template XML file.
For more, you can watch these videos or read the Getting Started Guide.
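As a rough sketch of that conversion step (the toolkit path and file names below are placeholders):
# Hypothetical invocation of the MiNiFi Toolkit converter: transform the
# template XML exported from NiFi into the config.yml that MiNiFi loads.
./minifi-toolkit/bin/config.sh transform exported-flow.xml config.yml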
You'll want to look for lines like the following in the config.yml:
Security Properties:
  keystore: /tmp/ssl/localhost-ks.jks
  keystore type: JKS
  keystore password: localtest
  key password: localtest
  truststore: /tmp/ssl/localhost-ts.jks
  truststore type: JKS
  truststore password: localtest
  ssl protocol: TLS
  Sensitive Props:
    key:
    algorithm: PBEWITHMD5AND256BITAES-CBC-OPENSSL
    provider: BC
Remote Processing Groups:
  - name: http://localhost:8080/nifi
    url: http://localhost:8080/nifi
    comment: ''
    timeout: 30 sec
    yield period: 10 sec
Input Ports:
  - id: AUTOGENERATED_NIFI_PORT_ID_HERE
    name: MiNiFi-input
    comment: ''
    max concurrent tasks: 1
    use compression: false
    Properties: # Deviates from spec and will later be removed when this is autonegotiated
      Port: 1026
      Host Name: localhost

Setting up ELK stack

I'm completely new to ELK and trying to install the stack with some beats for our servers.
Elasticsearch, Kibana and Logstash are all installed (on server A). I followed this guide: https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html.
Filebeat template was installed as well.
I also installed Filebeat on another server (server B) and tried to test the connection:
$ /usr/share/filebeat/bin/filebeat test output -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
logstash: my-own-domain:5044...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 163.172.167.147
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
Things seem to be OK, yet Filebeat on server B doesn't appear to be sending data to Logstash.
Accessing Kibana keeps redirecting me back to the Create Index Pattern page, with the message
Couldn't find any Elasticsearch data
Any direction pointing would be really appreciated.
Can you check your filebeat.yml file and see if the configuration for logs is activated:
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log
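If the prospector is enabled and data still doesn't arrive, one quick check is whether any Filebeat index was created in Elasticsearch on server A; the host below is a placeholder:
# Hypothetical check: filebeat-* indices should appear once events flow.
curl 'http://serverA:9200/_cat/indices/filebeat-*?v'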
