I am new to Prometheus, so I am not sure if high availability is part of the Prometheus data store, TSDB. I am not looking for something like having two Prometheus server instances scrape data from the same exporter, as that has a high chance of producing two TSDB data stores that are out of sync.
It really depends on your requirements.
Do you need highly available alerting on your metrics? Prometheus can do that.
Do you need a highly available monitoring system that contains the last few hours of data for operational triage? Two Prometheus instances are pretty good for that too.
Do you need long-term storage of time series data? Prometheus is not designed to accomplish this on its own. You can use the remote write functionality of Prometheus to ship data to another TSDB that supports redundant storage (InfluxDB and ClickHouse are pretty promising here), but you are on the hook for de-duplicating data. Alternatively, consider Cortex.
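If you do run a two-replica setup and read from it directly, the de-duplication burden usually reduces to "prefer one replica, fall back to the other". A minimal sketch of that idea in Python, assuming both replicas expose the standard Prometheus HTTP query API (the replica URLs and the query are placeholders, not a verified setup):

```python
# Sketch only: read from whichever HA replica answers successfully.
# Replica URLs and the example query are illustrative placeholders.
import requests

REPLICAS = ["http://prometheus-0:9090", "http://prometheus-1:9090"]

def query(expr):
    last_err = None
    for base in REPLICAS:
        try:
            resp = requests.get(f"{base}/api/v1/query",
                                params={"query": expr}, timeout=5)
            resp.raise_for_status()
            body = resp.json()
            if body.get("status") == "success":
                return body["data"]["result"]
        except requests.RequestException as err:
            last_err = err
    raise RuntimeError(f"all replicas failed: {last_err}")

print(query("up"))
```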
For a Kubernetes setup using kube-prometheus (prometheus-operator), you can configure it through the values file, and including Thanos would help in this situation.
There is prometheus-postgresql-adapter that allows you to use PostgreSQL / TimescaleDB as a remote storage. The adapter enables multiple Prometheus instances (HA setup) to write to a single remote storage, so you have one source of truth. Recently, I've published a blog post about it: [How to manage Prometheus high-availability with PostgreSQL + TimescaleDB](https://blog.timescale.com/blog/prometheus-ha-postgresql-8de68d19b6f5/).
Disclaimer: I am one of the engineers behind the adapter
I use Prometheus to gather Kubernetes resource metrics.
The resource data pipeline is as follows:
k8s -> Prometheus -> Java app -> Elasticsearch -> Java app
Here I have a question.
Why use Prometheus?
Wouldn't Prometheus be unnecessary if the data were stored in a DB like mine?
Whether I use Elasticsearch or MongoDB, would I still need Prometheus?
It definitely depends on what exactly you are trying to achieve by using these tools. In general, the scope of usage is quite different.
Prometheus is specifically designed for metrics collection, system monitoring, and creating alerts based on those metrics. That's why it is the better choice when the primary requirement is to pull metrics from services and run alerts on them.
Elasticsearch, in turn, is a system with a wider scope: it is used to store and search data of all types and to perform different kinds of analytics on that data, and it is mostly used as a log analysis system. It can also be configured for monitoring, although it is not purpose-built for that, unlike Prometheus.
Both tools are good to use, but Prometheus provides more simplicity in setting up monitoring for Kubernetes.
I am configuring Prometheus to access Spring boot metrics data. For some of the metrics, Prometheus's pull mechanism is ok, but for some custom metrics I prefer push based mechanism.
Does Prometheus allow to push metrics data?
No.
Prometheus is very opinionated, and one of its design decisions is to disallow push as a mechanism into Prometheus itself.
The way around this is to push into an intermediate store and allow Prometheus to scrape data from there. This isn't fun, and there are considerations around how quickly you want to drain your data and how to pass data into Prometheus with timestamps -- I've had to override the Prometheus client library for this.
https://github.com/prometheus/pushgateway
Prometheus provides its own collector for this (the Pushgateway linked above), which looks like it would be what you want, but it has weird semantics around when it expires pushed metrics (it never does; it only overwrites the value when a new datapoint arrives with the same labels).
They make it very clear that they don't want it used for pushed metrics.
All in all, you can hack something together to get something close to push events.
But you're much better off embracing the pull model than fighting it.
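For completeness, should you go the Pushgateway route anyway, here is a minimal sketch with the official Python client; the gateway address, job name and metric are illustrative assumptions, not a recommended setup:

```python
# Sketch: push one batch-job metric to a Pushgateway that Prometheus scrapes.
# The gateway address and metric name are illustrative assumptions.
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
duration = Gauge("batch_job_duration_seconds",
                 "Duration of the last batch job run",
                 registry=registry)
duration.set(42.5)

# Each push overwrites the previous value for the same job/grouping key;
# the Pushgateway never expires it, as described above.
push_to_gateway("localhost:9091", job="batch_job", registry=registry)
```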
Prometheus has added support for the push model recently. It is called the remote write receiver.
Link: https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver
From what I understand, it only accepts POST requests with protocol buffers and snappy compression.
Prometheus doesn't support the push model. If you need a Prometheus-like monitoring system that supports both the pull model and the push model, then try VictoriaMetrics - the project I work on:
It supports scraping of Prometheus metrics - see these docs.
It supports data ingestion (aka push model) in Prometheus text exposition format - see these docs.
It supports other popular data ingestion formats such as InfluxDB line protocol, Graphite, OpenTSDB, DataDog, CSV and JSON - see these docs.
In addition to this, VictoriaMetrics provides the Prometheus querying API and a PromQL-like query language - MetricsQL - so it can be used as a drop-in replacement for Prometheus in most cases.
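As a rough illustration only (the endpoints are described in the docs linked above, but treat the host, port and metric here as placeholders), pushing a sample in the Prometheus text exposition format to a single-node VictoriaMetrics instance could look like this:

```python
# Sketch: push one sample in Prometheus text exposition format.
# Host, port and metric name are illustrative placeholders.
import time
import requests

# Exposition-format timestamps are in milliseconds and are optional.
line = 'my_custom_metric{job="demo",instance="host1"} 123 %d\n' % int(time.time() * 1000)

resp = requests.post("http://victoria-metrics:8428/api/v1/import/prometheus",
                     data=line, timeout=5)
resp.raise_for_status()
```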
I plan to set up monitoring for Redmine, with the help of which I can see man-hours spent on tickets, time taken to complete a ticket, etc., to monitor the productivity of my team. I want to see all of this in Grafana. As of now I am thinking of using Prometheus and exposing the metrics, but I am not sure how (I might have to create an exporter, but I am not sure if that would work). So basically, how can this be done?
A Prometheus exporter is simply an HTTP server that sits next to your target (Redmine in your case, although I have no experience with it) and whenever it gets a /metrics request it does one or more API calls to the target (assuming Redmine provides an API to query the numbers you need) and returns said numbers as Prometheus metrics with names, labels etc.
Here are the Prometheus clients (that help expose metrics in the format accepted by Prometheus) for Go and Java (look for simpleclient_http or simpleclient_servlet). There is support for many other languages.
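As a rough sketch of what such an exporter could look like in Python with the prometheus_client library (the Redmine URL, API key, endpoints and metric names are assumptions for illustration, not a tested exporter):

```python
# Sketch: periodically pull numbers from the Redmine REST API and expose them
# as Prometheus gauges on /metrics. URL, API key and fields are assumptions.
import os
import time

import requests
from prometheus_client import Gauge, start_http_server

REDMINE_URL = os.environ.get("REDMINE_URL", "http://redmine.example.com")
API_KEY = os.environ.get("REDMINE_API_KEY", "changeme")
HEADERS = {"X-Redmine-API-Key": API_KEY}

open_issues = Gauge("redmine_open_issues", "Number of open issues")
spent_hours = Gauge("redmine_spent_hours", "Sum of hours on recent time entries")

def refresh():
    issues = requests.get(f"{REDMINE_URL}/issues.json",
                          params={"status_id": "open", "limit": 1},
                          headers=HEADERS, timeout=10).json()
    open_issues.set(issues["total_count"])

    entries = requests.get(f"{REDMINE_URL}/time_entries.json",
                           params={"limit": 100},
                           headers=HEADERS, timeout=10).json()
    spent_hours.set(sum(e["hours"] for e in entries["time_entries"]))

if __name__ == "__main__":
    start_http_server(9394)   # Prometheus scrapes http://<host>:9394/metrics
    while True:
        refresh()
        time.sleep(60)
```

A fuller exporter would usually do the API calls inside a custom collector so the numbers are fetched on each scrape, but the periodic refresh above keeps the sketch short.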
Adding on to @Alin's answer: to expose Redmine metrics to Prometheus, you would need to install an exporter.
Here is a Redmine plugin available for Prometheus: https://github.com/mbeloshitsky/redmine_prometheus.git
You can get the hours and all the data you need through the Redmine REST APIs. Write a little program to fetch the data and update it in Graphite or Prometheus (see the sketch below). You can perform this task using Sensu by creating a metric script in Python, Ruby or Perl. Next, all you have to do is plot the graphs. Well, that's another race :P
Redmine guide: http://www.redmine.org/projects/redmine/wiki/Rest_api_with_python
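A hedged sketch of that "little program" for the Graphite variant (the hostnames, API key and metric path are placeholders): pull time entries from the REST API and write a datapoint to Graphite's plaintext port.

```python
# Sketch: fetch spent hours from the Redmine REST API and send one datapoint
# to Graphite's plaintext protocol (TCP port 2003). All names are placeholders.
import socket
import time

import requests

entries = requests.get(
    "http://redmine.example.com/time_entries.json",
    params={"limit": 100},
    headers={"X-Redmine-API-Key": "changeme"},
    timeout=10,
).json()["time_entries"]

total_hours = sum(e["hours"] for e in entries)

with socket.create_connection(("graphite.example.com", 2003), timeout=5) as sock:
    sock.sendall(f"redmine.spent_hours {total_hours} {int(time.time())}\n".encode())
```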
Could someone please let me know what is meant by 'Big Data implementation over the Cloud'?
I have been using Amazon S3 to store data and querying it using Hive, which I read is one such cloud implementation. I would like to know what exactly this means and all the possible ways to implement it.
Following are choices in the levels of services that a Cloud provider can offer for a Big Data analytics solution:
Data platform infrastructure service, such as Hadoop as a Service, that provides pre-installed and managed infrastructures. With this level of service, you are responsible for loading, governing, and managing the data and analytics for the analytics solution.
Data management service, such as a Data Lake Service, that provides data management, catalog services, analytics development, security, and information governance services on top of one or more data platforms. With this level of service, you are responsible for defining the policies for how data is managed and for connecting data sources to the cloud solution. The data owners have direct control of how their data is loaded, secured, and used. Consumers of data are able to use the catalog to locate the data they want, request access, and make use of the data through self-service interfaces.
Insight and Data Service, such as a Customer Analytics Service, that gives you the responsibility for connecting data sources to the cloud solution. The cloud solution then provides APIs to access combinations of your data and additional data sources, both proprietary to the solution and public open data, along with analytical insight generated from this data.
For more information regarding this, read the detailed article published by IBM here: http://www.ibm.com/developerworks/cloud/library/cl-ibm-leads-building-big-data-analytics-solutions-cloud-trs/index.html
Also take a look at the services provided by Qubole, which greatly simplifies, speeds and scales big data analytics workloads against data stored on AWS, Google, or Azure clouds - https://www.qubole.com/features.
Storing and processing big volumes of data requires scalability plus availability, and cloud computing delivers both through hardware virtualization. For the same reason, it is only logical that big data and cloud computing are two compatible concepts, as the cloud enables big data to be available, scalable and fault tolerant.
And the implementation does not stop there - many companies are now offering Big Data as a Service (BDaaS), such as Stratoscale and Cloudera, and of course Azure and others.
I want to scale out my EC2 instances on AWS. For this, it has been suggested that I use the Sensu framework.
I want to scale the instances out based on their CPU usage. For testing I have configured Sensu on both Windows and Ubuntu (VirtualBox), and I'm running a client on Ubuntu by following this example. My CPU data is successfully passed to RabbitMQ.
Now I'm wondering how I can use that data in the Sensu server so that I can scale in or scale out? Any suggestion will be appreciated.
In case it matters, I will use this with Opscode Chef.
The easiest way to achieve your goal would be to connect the available components together (which will still require writing some code, see below) and refrain from adding custom solutions as much as possible:
Amazon EC2 offers Auto Scaling, which is in turn driven by metrics collected via Amazon CloudWatch. So metrics are key here, and that's exactly what Sensu is all about; see e.g. Sensu and Graphite, which covers two approaches for pushing metrics from Sensu to Graphite:
Remember: think of Sensu as the "monitoring router". While we are going to show how to push metrics to Graphite, it is just as easy to push metrics to any other system – Librato, Cube, OpenTSDB, etc. In fact, it would not be difficult at all to push metrics to multiple graphing backends in a fanout manner. [emphasis mine]
Your metrics are available in the Sensu server already, so you'll need to push them into CloudWatch now (just like explained for Graphite in the article above) and attach respective Auto Scaling policies to these in turn.
The currently available metrics handlers for Sensu do indeed target Graphite and Librato, so you'd need to implement such a Sensu Handler for Publishing Custom Metrics into CloudWatch (be sure to share it, it will definitely be widely used over time :)
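For the sake of illustration, such a handler might look roughly like the sketch below: a pipe handler that reads the Sensu event from stdin, parses Graphite-style "name value timestamp" lines from the check output, and publishes them with boto3. The CloudWatch namespace and the output-format assumption are mine, not a published handler.

```python
# Sketch of a Sensu pipe handler that forwards metrics to CloudWatch.
# Assumes the check output contains Graphite-style "name value timestamp" lines.
import json
import sys
from datetime import datetime, timezone

import boto3

def main():
    event = json.load(sys.stdin)                      # Sensu pipes the event JSON to stdin
    output = event.get("check", {}).get("output", "")

    metric_data = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue
        name, value, ts = parts
        metric_data.append({
            "MetricName": name,
            "Value": float(value),
            "Timestamp": datetime.fromtimestamp(int(ts), tz=timezone.utc),
        })

    if metric_data:
        # CloudWatch caps the number of metrics per call, so chunk this in production.
        boto3.client("cloudwatch").put_metric_data(
            Namespace="Custom/Sensu",                 # namespace is an assumption
            MetricData=metric_data)

if __name__ == "__main__":
    main()
```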
Good luck!