How to access/save Istio logs in Windows 10?

I installed Istio 1.7.3 on Windows 10 using Docker Desktop 3.2.2 and successfully ran the sample Bookinfo demo. I started Prometheus to see HTTP logs when sending requests to localhost/productpage. I want to be able to save those logs on my host machine (Windows 10). Additionally, I want to save the metrics and logs of the Istio components (Mixer, Citadel, Envoy, etc.) to Windows too. How can I achieve this? Also, does the path /dev/null specified here refer to a path inside the pod itself?
EDIT:
The documentation for Istio v1.1 uses Fluentd, Elasticsearch and Kibana to collect logs. How can I apply this to Istio v1.7?

The Logging with Mixer and Fluentd documentation says that Istio 1.7 is the last release to support Mixer:
Mixer is deprecated. The functionality provided by Mixer is being
moved into the Envoy proxies. Use of Mixer with Istio will only be
supported through the 1.7 release of Istio.
So basically, if your Istio version is below 1.8, you can still use Istio Logging with Mixer and Fluentd and collect logs through the Fluentd / Elasticsearch / Kibana stack.
Starting from version 1.8, Mixer is no longer supported in Istio.
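For Istio 1.8 and later, the replacement for the Mixer-based setup is Envoy's own access logging, enabled through the mesh config. A minimal sketch (assuming a default installation; the operator file is one of several ways to set this, applied with istioctl install -f):

```yaml
# Sketch: enable Envoy access logs mesh-wide (Istio 1.8+).
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    # Send access logs to the sidecar's stdout so `kubectl logs` can read them.
    accessLogFile: /dev/stdout
```

You can then save a pod's proxy logs on the Windows host with kubectl logs <pod> -c istio-proxy > proxy.log. This also answers the /dev/null question above: that path is inside the proxy container, and pointing the access log at /dev/null simply discards it.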

Related

Initiating Nifi GUI when installed on Google Cloud Compute Engine

I have reviewed this post, but it is not helping:
Installing Nifi on Google Cloud Compute Engine
Here is what I have already done:
installed Java 11 on Ubuntu LTS
installed NiFi on Ubuntu LTS
This is what I see when I start nifi.sh:
Java home: /usr/lib/jvm/java-1.11.0-openjdk-amd64
NiFi home: /usr/lib/nifi
Bootstrap Config File: /usr/lib/nifi/conf/bootstrap.conf
I have even tried to edit nifi.properties by changing nifi.web.http.host and nifi.web.http.port:
nifi.web.http.host=<my external IP from GCE>
nifi.web.http.port=8080
I have also adjusted the firewall settings and added port 8080 (tcp) and my IP in the IP ranges.
When I try to open the NiFi GUI, it just does NOT load.
Can you please help me with that?
As per this doc, cross-check whether NiFi is installed properly. The default port for NiFi is 8443; since you changed it to 8080, make sure that port is not in use by any other service and that no firewall rule is blocking it.
Open a web browser and navigate to https://localhost:8443/nifi (or replace the port with the one you changed to) and give it a try.
As per this SO answer, cross-check in logs/nifi-app.log whether the web server is actually listening. Share a screenshot if you are getting any errors.
Refer to these similar issues: link1, link2, link3
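As a sketch, the relevant nifi.properties entries for plain-HTTP access on port 8080 are the nifi.web.http.* keys (the values below are placeholders):

```
# Hypothetical values; the HTTPS properties must be left empty for plain HTTP.
nifi.web.http.host=0.0.0.0
nifi.web.http.port=8080
nifi.web.https.host=
nifi.web.https.port=
```

Note that on Compute Engine the external IP is NAT-ed and is not bound to the instance's network interface, so setting nifi.web.http.host to the external IP will fail to bind; bind to 0.0.0.0 or the internal IP instead, and browse to the external IP from outside.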

Access GCP Managed Prometheus metrics from Grafana on Windows

I have installed Grafana (running at localhost:3000) and Prometheus (running at localhost:9090) on Windows 10, and am able to add the latter as a valid data source to the former. However, I want to create Grafana dashboards for data from Google's Managed Prometheus service. How do I add Google's Managed Prometheus as a data source in Grafana, running on Windows 10? Is there a way to accomplish this purely with native Windows binaries, without using Linux binaries via Docker?
I've not done this myself yet.
I'm also using Google's (very good) Managed Service for Prometheus.
It's reasonably well documented: Managed Prometheus: Grafana.
There's an important caveat under Authenticating Google APIs: "Google Cloud APIs all require authentication using OAuth2; however, Grafana doesn't support OAuth2 authentication for Prometheus data sources. To use Grafana with Managed Service for Prometheus, you must use the Prometheus UI as an authentication proxy."
Step #1: use the Prometheus UI
The Prometheus UI is deployed to a GKE cluster and so, if you want to use it remotely, you have a couple of options:
Hacky: port-forward
Better: expose it as a service
Step #2: Hacky
NAMESPACE="..." # Where you deployed the Prometheus UI
PORT="9090"     # Any free local port; referenced again in Step #3
kubectl port-forward deployment/frontend \
--namespace=${NAMESPACE} \
${PORT}:9090
Step #3: From the host where you're running the port-forward, you should now be able to configure Grafana to use the Prometheus UI datasource on http://localhost:${PORT}. localhost because it's port-forwarding to your (local)host and ${PORT} because that's the port it's using.
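The data source can also be provisioned from a file instead of through the Grafana UI. A sketch, assuming the port-forward above is using local port 9090 (the file path is arbitrary):

```yaml
# provisioning/datasources/managed-prometheus.yaml (hypothetical path)
apiVersion: 1
datasources:
  - name: Managed Prometheus (via UI proxy)
    type: prometheus
    access: proxy
    # Points at the port-forwarded Prometheus UI frontend:
    url: http://localhost:9090
```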
Now you can connect GCP Managed Prometheus directly from Grafana using a service account. This feature is available from version 9.1.x.
I have tested GMP with a standalone Grafana on GKE and it works as expected.
https://grafana.com/docs/grafana/latest/datasources/google-cloud-monitoring/google-authentication/

Secure Kibana and Elasticsearch using SSL / TLS

Thanks for taking the time to read this :)
My web app (grimoirelab) contains multiple services spun up with docker-compose, including Elasticsearch and Kibana. Port 5601 (Kibana) is open and accessible through the web.
I want to enable SSL / TLS in the Kibana container, i.e., change the URL from http to https.
Kibana and Elasticsearch are both version 6.8.6.
I have very little experience in web security, so I would really appreciate any guidance.
You can follow this Elasticsearch documentation for the configuration of SSL and TLS; the security features are available for free from version 6.8 onwards.
Please check the Configuring SSL, TLS, and HTTPS to secure Elasticsearch, Kibana, Beats, and Logstash blog.
Please check this documentation for how to set up SSL and TLS with the Elasticsearch Docker container.
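A minimal sketch of the relevant settings for 6.8, assuming you have already generated certificates (all file paths below are placeholders):

```
# elasticsearch.yml -- encrypt the HTTP (client) layer
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/es.key
xpack.security.http.ssl.certificate: certs/es.crt

# kibana.yml -- serve Kibana itself over HTTPS on 5601
server.ssl.enabled: true
server.ssl.certificate: certs/kibana.crt
server.ssl.key: certs/kibana.key
elasticsearch.url: "https://elasticsearch:9200"
elasticsearch.ssl.certificateAuthorities: ["certs/ca.crt"]
```

In docker-compose, the certificate files can be supplied via volumes and the settings via environment variables or a mounted config file.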

Kubernetes and Prometheus not working together with Grafana

I have created a Kubernetes cluster on my local machine with one master and, at the moment, zero workers, using kubeadm as the bootstrap tool. I am trying to get Prometheus (installed via the Helm package manager) and Kubernetes metrics together into the Grafana Kubernetes App, but this is not working. The way I am setting up the monitoring is:
Open grafana-server at port 3000 and install the Kubernetes app.
Install stable/prometheus from Helm, using a custom YAML file I found in another guide.
Add the Prometheus data source to Grafana with the IP from the Kubernetes Prometheus service (or pods; I tried both and both work) and use TLS Client Auth.
Start the proxy with kubectl proxy.
Fill in all the information needed in the Kubernetes Grafana app and then deploy it. No errors.
All Kubernetes metrics show, but no Prometheus metrics.
If the kubectl proxy connection is stopped, the Prometheus metrics can be seen. There are no problems connecting to the Prometheus pod or service IP while kubectl proxy is running. Does someone have any clue what I am doing wrong?
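For reference, the two connection paths in the steps above use different URLs in the Grafana data source, which may explain why only one works at a time. A sketch (the service name and namespace are assumptions):

```
# Direct, using the cluster service/pod IP (no kubectl proxy involved):
http://<prometheus-service-ip>:9090

# Through `kubectl proxy` (listens on localhost:8001 by default),
# using the API server's service-proxy path:
http://localhost:8001/api/v1/namespaces/default/services/prometheus-server:80/proxy/
```

Also note that TLS Client Auth only applies when the endpoint actually serves HTTPS; a plain in-cluster Prometheus service usually does not.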

Packetbeat dashboard for Application logs

Can Packetbeat be used to monitor Tomcat server logs and Windows logs, or does it only do network monitoring, e.g., of database traffic?
Packetbeat only does network monitoring, but you can use it together with Logstash or Logstash-Forwarder to also get visibility into your logs.
It will only do network monitoring; you can use the ELK stack for Tomcat server logs.
#tsg is correct, but with the Beats 1.x release Logstash Forwarder is being deprecated in favor of another Beat called Filebeat. They also added Topbeat, which lets you monitor server load and processes in your cluster.
See:
* https://www.elastic.co/blog/beats-1-0-0
You will likely want to install the package repo for your OS, then install each with:
{package manager cmd} install packetbeat
{package manager cmd} install topbeat
{package manager cmd} install filebeat
They are each installed in common directories. For example, on Ubuntu (Linux) the config files are in /etc/<beat name>/<beat name>.yml, where beat name is one of the 3 above. The files are similar: you can disable the direct ES export and instead export to Logstash (comment out the ES section and uncomment the Logstash one), then add a beats input to your Logstash config. From then on, Logstash listens for any Beats on that port and can redistribute (or queue) events, using the [@metadata][beat] field to tell where they came from.
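As a sketch of that wiring (Beats 1.x era syntax; the log path and ports are placeholders):

```
# filebeat.yml -- ship logs to Logstash instead of Elasticsearch
filebeat:
  prospectors:
    - paths:
        - /var/log/tomcat*/catalina.out
output:
  logstash:
    hosts: ["localhost:5044"]
  # elasticsearch: ...   # commented out, as described above

# logstash.conf -- accept any Beat and index by its name
input {
  beats { port => 5044 }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```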
Libbeat also provides a framework to build your own Beat, so you can send any data you want to Logstash, which can queue and/or index it. ;-)
Packetbeat is used mainly for network analysis. It currently supports the following protocols:
ICMP (v4 and v6)
DNS
HTTP
Mysql
PostgreSQL
Redis
Thrift-RPC
MongoDB
Memcache
However, for visualizing Tomcat logs you can configure Tomcat to log via log4j, configure Logstash to take input from log4j, and then use Elasticsearch and Kibana to visualize the logs.
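A minimal sketch of that pipeline, assuming Tomcat ships events through log4j's SocketAppender to port 4560:

```
# logstash.conf -- log4j -> Elasticsearch, visualized in Kibana
input {
  log4j { port => 4560 }   # receives serialized LoggingEvents
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```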
To monitor Windows logs you can use another Beat, Winlogbeat.
