How to consume data in Elasticsearch from an HTTP/REST API call

I am working on a project where I need to create a Kibana dashboard with DevOps metrics.
We have multiple toolsets in use (Bitbucket, TeamCity, SonarQube, Nexus, Nolio).
The intention of the dashboard is to show a high-level snapshot of the project/application health. This will include details such as change lead time, deployment frequency, mean time to recovery, change failure rate, code quality, number of commits, etc.
My question is this: all of the above toolsets expose a RESTful API (or plain HTTP/S, for that matter), so how do I consume the data returned by these API calls (or from the UI pages of these tools) and then insert it into Elasticsearch so that it can later be used by Kibana?
Installing Logstash or Beats on the servers where these DevOps services run is not an option, as they are centralized for the organization, and having third-party software installed there would need a lot of hopping around for approvals and processes.
Please let me know if any more information is required from my side.
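One common pattern when you cannot install anything on the tool servers is a small poller that runs on any host with network access to both sides: it calls the tool's REST API on a schedule (cron, a scheduled task, a CI job) and indexes the response into Elasticsearch. Below is a minimal sketch in Python using requests and the official elasticsearch client; the SonarQube URL, credentials, metric keys, and index name are placeholders for your environment.

```python
# Minimal poller: pull metrics from a tool's REST API and index them
# into Elasticsearch. It runs on any host that can reach both systems,
# so nothing is installed on the tool's server. URLs, credentials, and
# the index name are placeholders.
import datetime

import requests
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # your Elasticsearch endpoint

def poll_sonarqube():
    # Hypothetical example against SonarQube's measures API; adjust the
    # component and metric keys to your project.
    resp = requests.get(
        "https://sonarqube.example.com/api/measures/component",
        params={"component": "my-app", "metricKeys": "bugs,coverage"},
        auth=("my_api_token", ""),  # SonarQube tokens go in the username field
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def index_snapshot(doc):
    # One document per poll; Kibana can then chart the values over time.
    doc["@timestamp"] = datetime.datetime.utcnow().isoformat()
    es.index(index="devops-metrics", document=doc)

if __name__ == "__main__":
    index_snapshot(poll_sonarqube())
```

The same loop works for Bitbucket, TeamCity, Nexus, and so on: one function per tool, each normalizing its response into a flat document before indexing.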

Related

Send email through Elastic when an error appears in the log

I need to send an email automatically whenever an error appears in my Elasticsearch logs.
Is there any way to do it?
I don't want to use Elastic Cloud for it.
I can use Watcher in Kibana, but my question is whether Watcher is also available locally, or only in the cloud?
Please help!
Watcher is available in on-premises installations if you have at least a Gold license; it is not available with the free Basic license.
The same goes for the Kibana e-mail action: it needs a Gold license.
You can check what is available on the subscriptions page.
If you do not have a Gold license for your on-premises cluster, you will need an external tool to query Elasticsearch and send e-mails. You can build one using one of the official client libraries (Python, Node.js, Java, etc.), or you can try other tools like ElastAlert.
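As a rough illustration of that last option, here is a minimal sketch in Python using the official client and the standard library's smtplib; the index name, query fields, and SMTP settings are assumptions you would adapt to your own cluster.

```python
# Minimal sketch: query Elasticsearch for recent error documents and
# send a notification e-mail. Index name, query fields, and SMTP
# settings are placeholders; run it from cron or a scheduler.
import smtplib
from email.message import EmailMessage

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Errors logged in the last 5 minutes (assumes a "level" field).
result = es.search(
    index="app-logs-*",
    query={
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-5m"}}}],
        }
    },
)

hits = result["hits"]["hits"]
if hits:
    msg = EmailMessage()
    msg["Subject"] = f"{len(hits)} error(s) found in Elasticsearch"
    msg["From"] = "alerts@example.com"
    msg["To"] = "oncall@example.com"
    msg.set_content("\n".join(str(h["_source"]) for h in hits[:20]))
    with smtplib.SMTP("smtp.example.com") as smtp:
        smtp.send_message(msg)
```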

How to integrate Firestore health checks and dashboard metrics with our internal company systems

Context: this is my first use of Firestore. I want to use it to push notification status to our mobile application. I can see that there is a Google Firestore dashboard under the Analytics umbrella. In our company we mainly use three tools for monitoring our applications: Zabbix, Dynatrace, and a certain internal solution based on Elasticsearch. I need to integrate our internal monitoring systems with the metrics resulting from our first Firestore project.
What I am looking for, based on personal assumptions:
1) Maybe there exist some GET endpoints that I can connect to and poll for information, let's say every minute
2) Maybe, following the idea of the Realtime Database pushing events across a long-lived connection, I can code a Spring Boot application that imports the Firebase SDK and connects to some specific Firestore endpoint which will push any events of interest (e.g. a delay based on custom logic, or a dead service)
3) Maybe some plugin I can connect straight to a Kafka hosted in our internal datacenter
4) Some plugin to connect from Firestore/Firebase to third-party tools (e.g. Zabbix, Dynatrace, or Elasticsearch)
5) Some dependency I could import in Google Cloud Functions, triggered by the Firestore health check engine, in order to post data to some internal endpoint
Perhaps there is already some approach universally used for the scenario where you have to connect Firestore to an internal monitoring system. I would highly appreciate it if you could tell me, so that I can narrow my Google searches, because I am not finding anything useful.
Please note, comparing monitoring approaches is not part of this question. It is a settled fact that our company uses internal dashboards and some custom alert triggers. I just mentioned the names above to clarify what I mean by internal monitoring tools. The focus of this question is HOW to IMPORT/INTEGRATE/OBSERVE/CONSUME Firestore monitoring data. Our internal stack is beyond this question.
Here is the official documentation for Cloud Monitoring, with which you can collect metrics, events, and metadata from Google Cloud Platform products and use them to create dashboards, charts, and alerts.
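For example, Firestore publishes its metrics through the Cloud Monitoring API, so you can poll them and forward the values to your internal stack. Here is a minimal sketch using the google-cloud-monitoring Python client; the project ID is a placeholder, and you should check the documented Firestore metric types for the ones you need.

```python
# Minimal sketch: read a Firestore metric from the Cloud Monitoring API
# so it can be forwarded to an internal system (Zabbix, Elasticsearch, ...).
# The project ID and metric type are placeholders.
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-gcp-project"  # hypothetical project ID

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

series = client.list_time_series(
    request={
        "name": project_name,
        # One of the documented Firestore metrics; adjust as needed.
        "filter": 'metric.type = "firestore.googleapis.com/document/read_count"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for ts in series:
    for point in ts.points:
        print(point.interval.end_time, point.value.int64_value)
```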
Please let me know if you have further questions.

How do I manage microservices with DevOps?

Say I have a front-end node and three backend nodes: tools, blog, and store. Each node communicates with the others. Each of these nodes has its own set of languages and libraries, and its own Dockerfile.
I understand the DevOps lifecycle of a single monolithic web application, but cannot work out how a DevOps pipeline would work for microservices.
Would each microservice get its own GitHub repo and CI/CD pipeline?
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3, but blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync regarding which versions they are supposed to rely on?
If I'm deploying the tools service to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
Where can I go to learn more?
Would each microservice get its own GitHub repo and CI/CD pipeline?
In my experience you can do both. I have seen teams put multiple microservices in one repository. We put each microservice in a separate repository, as our Jenkins pipeline was built in a generic way that relied on that layout. This included having some configuration files in specific directories like "/Scripts/microserviceConf.json", which helped us in some cases. In general you should also consider cost, as GitHub has a pricing model which takes into account how many private repositories you have.
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3, but blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync regarding which versions they are supposed to rely on?
You need to stay backwards compatible. If your blog 2.4 version is not compatible with tools at version 2.3, you have high dependency and coupling, which goes against one of the key benefits of microservices. There are many ways to get around this. You can introduce a versioning scheme for your microservices: if you make a breaking change to, let's say, an API, you still need to support the old version for some time and create a new v2 of the API. For example, POST "blogs/api/blog" would get a new counterpart POST "blogs/api/v2/blog" with the new features, and the tools microservice would have some bridge time during which you support both APIs so it can migrate to v2.
Also take a look at Semantic versioning here.
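To make the bridge period concrete, here is a minimal sketch using Flask; the routes match the example above, while the payload fields are made up for illustration.

```python
# Minimal sketch of the bridge period: the blog service serves both the
# old and the new route until consumers such as "tools" have migrated
# to v2. Flask is used for brevity; payload fields are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/blogs/api/blog", methods=["POST"])
def create_blog_v1():
    # Old contract, kept alive during the migration window.
    data = request.get_json()
    return jsonify({"id": 1, "title": data["title"]}), 201

@app.route("/blogs/api/v2/blog", methods=["POST"])
def create_blog_v2():
    # New contract with the breaking change (here: an extra required field).
    data = request.get_json()
    return jsonify({"id": 1, "title": data["title"], "tags": data["tags"]}), 201

if __name__ == "__main__":
    app.run(port=5000)
```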
If I'm deploying the tools service to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
I am not quite sure what you mean here, but this goes in the direction of microservice orchestration and service discovery. Usually your cloud provider's services have tools to deal with this; you can take a look at AWS ECS and/or the AWS EKS Kubernetes service and how they do it.
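As a rough illustration of what those platforms provide: they typically register each service under a stable DNS name, so callers resolve the name at request time instead of hard-coding IPs. A minimal sketch, assuming a hypothetical internal DNS entry:

```python
# Minimal sketch of DNS-based service discovery: orchestrators such as
# ECS Service Discovery or Kubernetes keep a stable DNS name pointing
# at the current instances of a service. The hostname is hypothetical.
import socket

def resolve_service(hostname: str, port: int) -> list[str]:
    # Returns the current set of IPs behind the service name; the
    # orchestrator updates these records as instances come and go.
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    print(resolve_service("tools.internal.example.com", 8080))
```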
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
I would suggest using Docker and docker-compose to create your development setup. You would create a local development network of Docker containers representing your whole system, including your microservices, infrastructure (database, cache, helpers), and others. You can read more about it in this answer here, in the section "Considering the Development Setup".
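docker-compose describes such a setup declaratively in a YAML file; as a rough sketch of the same idea, here it is scripted with the Docker SDK for Python (the docker package). Images, names, and ports are placeholders.

```python
# Minimal sketch of a local dev setup: a shared network, one
# infrastructure container, and one microservice container. This is
# what a docker-compose file would express declaratively; images,
# names, and ports are placeholders.
import docker

client = docker.from_env()

# One shared network so containers can reach each other by name.
client.networks.create("devnet", driver="bridge")

# Infrastructure container: a database the services depend on.
client.containers.run(
    "postgres:16",
    name="db",
    network="devnet",
    environment={"POSTGRES_PASSWORD": "devonly"},
    detach=True,
)

# A microservice container, reaching the database as "db:5432".
client.containers.run(
    "my-org/blog-service:latest",  # hypothetical image
    name="blog",
    network="devnet",
    ports={"8080/tcp": 8080},
    detach=True,
)
```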
Where can I go to learn more?
There are multiple sources for learning this. Some are:
https://microservices.io/
https://www.datamation.com/applications/devops-and-microservices.html
https://www.mindtree.com/blog/look-devops-microservices
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/multi-container-applications-docker-compose

Microservice dependency manager tools

Is there a tool available to manage microservice dependencies?
For example, say there are services like an inventory service, a catalog service, and an identity service which together constitute a product service.
Is there a visual tool which can map all the dependencies, so that if any of the services is changed, it shows which other services are going to be affected?
While this question was posted some years ago, there is now an open source tool called Ortelius.io that does microservice dependency mapping across clusters. It tracks and versions 'logical' views of the application, shows which apps are dependent upon which services, and tracks this across all clusters with a full versioning engine.
https://github.com/ortelius/ortelius
I think your requirement is closely satisfied by the Service Maps feature of New Relic, an application performance monitoring platform.
Check out https://docs.newrelic.com/docs/using-new-relic/service-maps/get-started/introduction-service-maps
Service maps are visual, customizable representations of your application architecture.
Maps automatically show you your app's connections and dependencies, including databases and external services.
Health indicators and performance metrics show you the current operational status for every part of your architecture.
Well, not exactly a dependency manager, if there even is such a thing, but we made use of a tool called Pinpoint. Among its many features is one which shows all the services configured with Pinpoint and how they interact with other services and databases.
It may help you find how services are linked, and you can infer which services would be impacted if you alter a given service.
It may be a long shot to get a whole APM set up just to find these dependencies, but if you are starting from scratch, you may think about it.

Real-time number crunching and storage in the cloud

I have some hardware devices that send data which needs to be stored on a cloud server, and I also need to do some real-time processing on it.
The data they send needs to be preserved for months in some custom binary files. The files for each device can grow in size up to 10 GB over time.
There will be client programs (mobile/web) that look at the processed data in real time.
My preferred choice of language is C/C++/C#, since there is time-sensitive number crunching involved.
The goal is to write a scalable application that can have thousands of such devices monitored in the cloud.
Do I have to write the code for running on the cloud up front (on Azure / Amazon EC2, say)? Can I write a multi-threaded desktop application and later migrate it to the cloud?
I have used the Message Passing Interface (MPI) in the past for clusters. Can I still use MPI?
If I use the Microsoft Azure API, can I still host my software on the Amazon cloud?
For mobile devices to talk to the server, I understand that I need to have a web service running. How can I convert a desktop program written in C++/C# to act as a web service talking to clients?
Are there any third-party frameworks or tools that can help me with my work?
With most cloud compute services you can deploy an off-the-shelf server and install your own software on it. So, yes, you can write and test your application locally, then migrate to the cloud once you get all the bugs worked out. Here are the available EC2 server configurations.
I have not tried MPI, but you should be able to run just about anything you want on servers in the cloud. However, Amazon does offer the Simple Queue Service (SQS), which provides message passing in the cloud; your software does not need to run in the cloud to use this service.
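For reference, here is a minimal sketch of SQS message passing with boto3; the queue name and payload are placeholders, and as noted, the producer and consumer can run inside or outside AWS.

```python
# Minimal sketch: message passing through Amazon SQS with boto3.
# Queue name and payload are placeholders; sender and receiver can
# run inside or outside AWS.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="device-readings")["QueueUrl"]

# Producer: e.g. the ingest tier pushing a reading for processing.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"device": 42, "value": 3.14}')

# Consumer: a worker polls, processes, then deletes the message.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5
)
for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```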
I have not used Azure. I doubt there are any restrictions regarding which external servers you use for storage and/or compute. However, keeping your cloud storage and compute resources within a single provider will reduce costs, improve performance and provide you with a unified management interface and billing system.
Web servers are fairly simple things. See this post. That took me about 10 seconds to find.
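To illustrate how thin that layer can be, here is a minimal sketch using only Python's standard library: one HTTP endpoint wrapping a compute function. The crunch() function is a stand-in for your own number-crunching code, which could equally stay in C++/C# behind a similar HTTP layer.

```python
# Minimal sketch of wrapping existing compute code in a web service
# using only the standard library. crunch() is a stand-in for the real
# number-crunching routine.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def crunch(values):
    # Placeholder for the real computation.
    return {"count": len(values), "sum": sum(values)}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        values = json.loads(self.rfile.read(length))
        body = json.dumps(crunch(values)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Try it: POST a JSON list like [1, 2, 3] to http://localhost:8000/
    HTTPServer(("", 8000), Handler).serve_forever()
```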
There is plenty of third-party software out there. Figure out what you need in more detail and ask more specific questions.
