How to integrate Firestore health check and dashboard metrics with our internal company systems (Elasticsearch)

Context: this is my first time using Firestore. I want to use it to push notification status to our mobile application. I can see that there is a Google Firestore dashboard under the Analytics umbrella. In our company we mainly use three tools for monitoring our applications: Zabbix, Dynatrace, and an internal solution based on Elasticsearch. I need to integrate our internal monitoring systems with the metrics produced by our first Firestore project.
What I am looking for, based on my own assumptions:
1) Maybe there are GET endpoints I can connect to and poll for information, let's say once a minute.
2) Maybe, following the idea of the Realtime Database pushing events across a long-lived connection, I can code a Spring Boot application that imports the Firebase SDK and keeps a connection open to some specific Firestore endpoint that pushes the events I am interested in (e.g. a delay based on custom logic, or a dead service); see the sketch after this list.
3) Maybe there is some plugin I can connect straight to a Kafka cluster hosted in our internal datacenter.
4) Some plugin to connect Firestore/Firebase to third-party tools (e.g. Zabbix, Dynatrace, or Elasticsearch).
5) Some dependency I could import in a Google Cloud Function, triggered by the Firestore health-check engine, to post data to an internal endpoint.
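To make option 2 concrete, here is a minimal sketch of what I imagine such a listener could look like, using the Python client just for brevity instead of Spring Boot; the `service_status` collection and its fields are my own assumptions for illustration:

```python
import threading

from google.cloud import firestore  # pip install google-cloud-firestore

db = firestore.Client(project="my-gcp-project")  # assumption: project id

def on_snapshot(col_snapshot, changes, read_time):
    # Invoked over the open connection whenever a document changes.
    for change in changes:
        if change.type.name in ("ADDED", "MODIFIED"):
            status = change.document.to_dict()
            # Custom logic would go here, e.g. flag a dead service or a
            # delayed notification and forward it to internal monitoring.
            print(change.document.id, status)

# 'service_status' is a hypothetical collection our app would maintain.
watch = db.collection("service_status").on_snapshot(on_snapshot)

threading.Event().wait()  # keep the process alive while listening
```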
Perhaps there is already a commonly used approach for connecting Firestore to an internal monitoring system. I would highly appreciate a pointer so I can narrow my searching, because I am not finding anything useful.
Please note that comparing monitoring approaches is not part of this question. It is a settled fact that our company uses internal dashboards and custom alert triggers; I only mentioned the names above to clarify what I mean by internal monitoring tools. The focus of this question is HOW to import/integrate/observe/consume Firestore monitoring data. Our internal stack is beyond the scope of this question.

Here is the official documentation for Cloud Monitoring, which you can use to collect metrics, events, and metadata from Google Cloud Platform products and then build dashboards, charts, and alerts.
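For the polling idea in particular (your option 1), the Cloud Monitoring API exposes Firestore metrics as time series that you can read and forward wherever you like. A minimal sketch, assuming the Python client; the metric type, project id, and Elasticsearch index are placeholders you should verify against your own project:

```python
import time

import requests  # plain HTTP is enough to index into Elasticsearch
from google.cloud import monitoring_v3  # pip install google-cloud-monitoring

PROJECT = "projects/my-gcp-project"  # assumption: your GCP project
# One of the Firestore metrics; check Metrics Explorer for the full list.
METRIC = "firestore.googleapis.com/document/read_count"
ES_URL = "https://elastic.internal:9200/firestore-metrics/_doc"  # assumption

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 300}, "end_time": {"seconds": now}}
)

series_list = client.list_time_series(
    request={
        "name": PROJECT,
        "filter": f'metric.type = "{METRIC}"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in series_list:
    for point in series.points:
        doc = {
            "metric": METRIC,
            "value": point.value.int64_value,  # read_count is an int64 delta
            "end_time": point.interval.end_time.isoformat(),
        }
        requests.post(ES_URL, json=doc, timeout=10)
```

Run on a one-minute schedule (cron, or a Cloud Scheduler-triggered Cloud Function, which also covers your option 5), this gives your Elasticsearch stack a steady feed without any long-lived connection.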
Please let me know if you have further questions.

Related

Elasticsearch/Elastic Cloud Alert Creation

I am a newbie in Elastic in general, and currently I am trying to manage our alerts for CPU/disk/memory in Elastic Cloud. I can create the alerts manually just fine, but that takes a huge amount of time, and if we migrate I want to be able to create the alerts in some automated way. In the past I have worked with Azure and created alerts with Az PowerShell and the like, so I am searching for how to automate the alert creation for our infrastructure in Elastic Cloud. I went through the documentation for Alerts, but I'm not sure I understand how to use the API to actually do this.
Is there a way to automate, let's say, the creation of CPU alerts for 10 different hosts that we monitor with Elastic? Is using the API the only way, and are there any materials other than the official documentation that can help me achieve this? Am I even on the correct path? Thank you in advance.
Let me share some knowledge from using Azure Monitor, where you connect your resources to Azure Monitor and manage alerts. Alerts can send you an email or call a webhook when some metric (for example, database size or CPU usage) reaches a threshold. There are several ways to create alerts: using the Azure Portal, the command-line interface, PowerShell, or the Azure Monitor REST API. Hope this helps.
You can even automate alerts using an Azure Automation runbook with metric alerts, where the alerts can be automated according to customized dimension values, and once the alert criteria are met it can even send an email.
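Back on the Elastic Cloud side of the question: the Kibana Alerting API is scriptable, so you can create one rule per host in a loop. A minimal sketch, assuming Kibana 7.13+, an API key with alerting privileges, and the metric-threshold rule type; the exact `params` schema varies by rule type, so create one rule in the UI first and copy its shape:

```python
import requests

KIBANA = "https://my-deployment.kb.us-east-1.aws.found.io"  # assumption
HEADERS = {
    "kbn-xsrf": "true",                       # required by the Kibana API
    "Authorization": "ApiKey <base64-key>",   # assumption: API key auth
}

HOSTS = ["web-01", "web-02", "db-01"]  # placeholder host names

for host in HOSTS:
    rule = {
        "name": f"CPU high on {host}",
        "rule_type_id": "metrics.alert.threshold",  # metric threshold rule
        "consumer": "infrastructure",
        "schedule": {"interval": "1m"},
        "params": {
            # Copied from a rule created in the UI; adjust to your schema.
            "criteria": [{
                "aggType": "avg",
                "metric": "system.cpu.total.norm.pct",
                "comparator": ">",
                "threshold": [0.9],
                "timeSize": 5,
                "timeUnit": "m",
            }],
            "filterQuery": f'host.name: "{host}"',
        },
        "actions": [],  # attach email/webhook connectors here
    }
    resp = requests.post(f"{KIBANA}/api/alerting/rule",
                         json=rule, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    print(host, resp.json()["id"])
```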

Generate report from Microsoft Application Insights data

Currently we use Microsoft Application Insights for performance tracking, and it has worked very well; we can easily grab reports/charts in the Azure portal. The problem is that the application we are monitoring is for one of our clients, and we don't want to share the Azure portal with them.
I know there is an Application Insights (AI) API which could be used to grab data and do whatever we want with it, but is there any easy way to share AI data with the client without letting them log into the AI portal in Azure?
Thanks.
A read-only Power BI dashboard may be a good option here. The steps for a couple of ways of achieving this integration are here. However, you may go an even simpler route:
use the Export button in the Analytics UI of the Application Insights resource and choose "Power BI (M query)" as the target;
paste this query as a new data source in Power BI (of type "Blank Query");
authenticate to the AI backend (that's the important part of making this dashboard read-only, so no one can change the query to extract other data under the same account);
create visualizations.
Another way entirely is to fork a subset of the data into the customer's AI resource (the AI SDK supports sending data to several instrumentation keys if necessary).
You could also use the API key feature of Application Insights: generate a read-only API key and use the Application Insights REST API to build a custom solution that runs the queries and generates reports. This would let anyone with that API key see any telemetry in your app, though.
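As a hedged sketch of that REST API approach: with an app id and a read-only API key, the query endpoint accepts any Kusto query. The app id, key, and query below are placeholders:

```python
import requests

APP_ID = "<application-id>"       # from the AI resource's API Access blade
API_KEY = "<read-only-api-key>"   # generated with "Read telemetry" only

# Hourly request counts for the last day -- a placeholder Kusto query.
QUERY = "requests | where timestamp > ago(1d) | summarize count() by bin(timestamp, 1h)"

resp = requests.get(
    f"https://api.applicationinsights.io/v1/apps/{APP_ID}/query",
    params={"query": QUERY},
    headers={"x-api-key": API_KEY},
    timeout=30,
)
resp.raise_for_status()

table = resp.json()["tables"][0]
for row in table["rows"]:
    print(row)  # feed these rows into whatever report you build for the client
```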

Microservice dependency manager tools

Is there a tool available to manage microservice dependencies?
For example, if there are services like an inventory service, a catalog service, and an identity service which together constitute a product service.
Is there a visual tool which can map all the dependencies, so that if any of the services changes, it shows which other services are going to be affected?
While this question was posted some years ago, there is now an open-source tool called Ortelius.io that does microservice dependency mapping across clusters. It tracks and versions 'logical' views of the application, shows which apps depend on which services, and tracks this across all clusters with a full versioning engine.
https://github.com/ortelius/ortelius
I think your requirement is closely satisfied by the Service Maps feature of New Relic, which is an application performance monitoring (APM) platform.
Check out https://docs.newrelic.com/docs/using-new-relic/service-maps/get-started/introduction-service-maps
Service maps are visual, customizable representations of your application architecture.
Maps automatically show you your app's connections and dependencies, including databases and external services.
Health indicators and performance metrics show you the current operational status for every part of your architecture.
Well, not exactly a dependency manager, if anything like that exists at all, but we made use of a tool called Pinpoint. Among its many features is one which shows all the services configured with Pinpoint and how they interact with other services and databases.
It may help you see how services are linked, and you can infer which services would be impacted if you alter a given service.
It may be a long shot to set up a whole APM just to find these dependencies, but if you are starting from scratch, you may want to consider it.

How to consume data in Elasticsearch from an HTTP/REST API call

I am working on a project where I need to create a Kibana dashboard with DevOps metrics.
We have multiple toolsets in use (Bitbucket, TeamCity, SonarQube, Nexus, Nolio).
The intention of the dashboard is to show a high-level snapshot of project/application health. This will include details such as change lead time, deployment frequency, mean time to recovery, change failure rate, code quality, number of commits, etc.
My question is this: all the above toolsets expose a RESTful API (or HTTP/S for that matter), so how do I consume the data returned by the API calls from these DevOps tools (or their UI pages) and then insert it into Elasticsearch, for it to be used later by Kibana?
Installing Logstash or Beats on the servers where these DevOps services are running is not an option, as they are centralized for the organization, and having third-party software installed there would need a lot of hopping around for approvals and processes.
Please let me know if any more information is required from my side.
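One agentless pattern is a small scheduled script on a central host you control: call each tool's REST API, then index the results into Elasticsearch over plain HTTP, so nothing gets installed on the tool servers. A minimal sketch using SonarQube's measures endpoint; the URLs, project key, and index name are assumptions:

```python
import datetime

import requests

SONAR = "https://sonarqube.internal"   # assumption: internal SonarQube
ES = "https://elastic.internal:9200"   # assumption: internal Elasticsearch
PROJECT_KEY = "my-app"                 # placeholder SonarQube project key

# SonarQube Web API: current values for a few quality metrics.
resp = requests.get(
    f"{SONAR}/api/measures/component",
    params={"component": PROJECT_KEY, "metricKeys": "bugs,coverage,code_smells"},
    auth=("<token>", ""),  # SonarQube tokens go in the username field
    timeout=30,
)
resp.raise_for_status()

doc = {"@timestamp": datetime.datetime.utcnow().isoformat(),
       "project": PROJECT_KEY}
for measure in resp.json()["component"]["measures"]:
    doc[measure["metric"]] = measure["value"]

# Index the snapshot; Kibana can then chart the 'devops-metrics' index.
requests.post(f"{ES}/devops-metrics/_doc", json=doc, timeout=10).raise_for_status()
```

The same loop pattern applies to Bitbucket, TeamCity, and the rest; run it from cron (or even a TeamCity job) so no agent ever touches the tool servers.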

Is Parse an adequate solution here?

I'm contemplating using Parse as a platform for my app, as I'm trying to avoid creating and managing the cloud infrastructure myself.
For the sake of simplicity let's say that my app will hook into an Exchange Server and will need to leverage some hosted Machine Learning service to categorize my e-mail and report on insights found.
I'm assuming that Parse would store my core data, while the hosted ML will store the "Big Data" associated with processing for insights.
I'm also expecting my app to receive push notifications generated by the hosted ML service.
Does this sound like a plausible way to go about it and leverage Parse, or am I better off developing the backend myself?
I think parse.com is the right place for your requirements, because they have everything you need: core data storage, push notifications, a cloud module which can be integrated with Heroku, social integration, and user management functionality.
They also have a large set of client libraries for desktop and mobile apps (Node, Java, .NET, etc.), as well as libraries for embedded devices.
The biggest advantage is that everything is already set up, so you are focused on software development, not on infrastructure. This is my opinion.
I've been experimenting with the above stack and so far have been really impressed. It seems like a viable path forward. The Cloud Code capability of Parse is very solid and easy to work with. If you want to run services outside of Parse code, this is also possible: just issue REST calls.
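For example, the hosted ML service could write its insight back to Parse (and trigger a push) with nothing but HTTP. A sketch against the hosted parse.com REST API of the time; the class name, channel, and keys are placeholders, and a self-hosted Parse Server would use your own mount path instead:

```python
import requests

HEADERS = {
    "X-Parse-Application-Id": "<app-id>",      # placeholder credentials
    "X-Parse-REST-API-Key": "<rest-api-key>",
    "Content-Type": "application/json",
}

# Store the insight as an object in a hypothetical EmailInsight class.
requests.post(
    "https://api.parse.com/1/classes/EmailInsight",
    json={"category": "travel", "score": 0.92},
    headers=HEADERS,
    timeout=10,
).raise_for_status()

# Notify subscribed devices; client push must be enabled for REST-key pushes.
requests.post(
    "https://api.parse.com/1/push",
    json={"channels": ["insights"], "data": {"alert": "New e-mail insights ready"}},
    headers=HEADERS,
    timeout=10,
).raise_for_status()
```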
