I have read the documentation, but I would like to learn more about InfluxDB and how it is used, and how worthwhile it is, before I start using it.
Can someone explain the questions below in detail?
1. What is the use of the InfluxDB Backend Listener in JMeter?
2. What is the difference between the InfluxDB Backend Listener and the Graphs Generator?
3. What are the steps involved in installing and configuring InfluxDB on Windows?
4. Along with InfluxDB, do we need to install and configure anything else?
5. How can we send the whole dashboard generated from the InfluxDB data to the team?
6. I would appreciate it if you could provide the detailed steps involved for #1 to #5.
Thanks,
Raj
InfluxDB is a time-series database (a lightweight database used to store time-dependent data, such as the results of a performance test).
Using InfluxDB along with Grafana, you can monitor certain test metrics live during a JMeter test, and you can also configure other system metrics (CPU/network/memory) to be collected and monitored.
To store data in InfluxDB, you need to set up the Graphite configuration within JMeter (see Real-Time Results). Then you can add a Backend Listener to push the results into the database.
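As a rough illustration only (assuming InfluxDB's Graphite listener is enabled on port 2003 on the same host; host, port, prefix and sampler filter are placeholders you will need to adapt), the Backend Listener using the Graphite client would be filled in roughly like this:

    Backend Listener implementation: org.apache.jmeter.visualizers.backend.graphite.GraphiteBackendListenerClient

    graphiteMetricsSender:     org.apache.jmeter.visualizers.backend.graphite.TextGraphiteMetricsSender
    graphiteHost:              localhost      (where InfluxDB's Graphite listener runs)
    graphitePort:              2003           (InfluxDB's Graphite listener port)
    rootMetricsPrefix:         jmeter.
    summaryOnly:               false
    samplersList:              .*
    useRegexpForSamplersList:  true
    percentiles:               90;95;99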
For InfluxDB installation on Windows, read this answer.
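As a very rough sketch for InfluxDB 1.x on Windows (download URL, version and paths will differ; the Graphite input shown matches the JMeter setup above): download and unzip the InfluxDB archive from the InfluxData downloads page, enable the Graphite input in influxdb.conf, and start the daemon.

    # in influxdb.conf -- enable the Graphite listener that JMeter will write to
    [[graphite]]
      enabled = true
      bind-address = ":2003"
      database = "jmeter"
      protocol = "tcp"

Then, from the unzipped folder:

    influxd.exe -config influxdb.conf
    influx.exe -execute "CREATE DATABASE jmeter"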
As for the dashboard, I guess you need to use Grafana to view the expected live test metrics in graphical form.
The connection to InfluxDB is correct: I can connect via the CLI, I can send JMeter data to InfluxDB correctly, and I can even see the data in InfluxDB, but Grafana Cloud does not plot graphs of the InfluxDB data.
I've researched everything and I can't find a way to debug this to understand what's going wrong.
InfluxDB working correctly: (screenshot)
Grafana data source settings: (screenshot)
I am new to JMeter HBase load testing. I have installed all the plugins related to HBase load testing. Can anyone help me understand what exactly I need to do to load test HBase, and what details I need to capture during HBase load testing?
What scenarios do I need to execute, and how should I proceed with the HBase load testing?
JMeter doesn't support HBase testing out of the box; first of all you will need to install the Hadoop/HBase Testing plugins, which you can do using the JMeter Plugins Manager.
What scenarios you need to execute - we don't know. You mention load testing, which means you should put your HBase instance under the anticipated load; in other words, your test must represent real-life HBase usage. You should ask your team for the details. If no one knows, or no one is willing to share the information, you can execute the real-life scenario via your system's user interface or API (or whatever is available) and check the HBase logs to extract the relevant queries from there.
How you can proceed with the HBase load testing - given you have the plugin and the queries, you can use the relevant HBase test elements. Most probably the HBase Connection Config is a must, and the HBase CRUD Sampler will cover 99% of your needs. You might also be interested in the How to Load Test HBase with JMeter article.
I would like to add a metric to Grafana from a Ruby project.
What are the parameters? Which gem can I use?
Is there a manual?
You should first look into data sources for Grafana: http://docs.grafana.org/features/datasources/. Data sources are the programs Grafana can query to generate a graph, so you need to install one of them on some device. Grafana itself does not store any data; it "just" sends queries to a data source and renders the results.
As you can see, there are a lot of possible data sources for Grafana. Commonly used ones are Graphite (my favourite) and InfluxDB (easy setup), but a standard SQL database could also be the way to go for you. When researching the possible data sources, you can also search for Ruby gems. I found one for InfluxDB, maintained by InfluxData itself: https://github.com/influxdata/influxdb-ruby
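For instance (a minimal sketch assuming InfluxDB 1.x running locally and a database called "myapp"; all names and values are placeholders), writing a point with that gem looks roughly like this:

    require 'influxdb'

    # connect to a local InfluxDB 1.x instance and the "myapp" database
    influxdb = InfluxDB::Client.new 'myapp', host: 'localhost'

    # write one data point into the "response_times" measurement
    influxdb.write_point('response_times',
      values:    { value: 120 },           # the metric value, e.g. milliseconds
      tags:      { controller: 'users' },  # tags you can filter/group by in Grafana
      timestamp: Time.now.to_i             # optional, defaults to the server time
    )

Grafana can then query the "response_times" measurement once the InfluxDB data source is configured.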
I would like to expose a table from my Oracle database to Apache Kafka. Is it technically possible?
I also need to stream data changes from my Oracle table and publish them to Kafka.
Do you know of good documentation for this use case?
Thanks
You need Kafka Connect JDBC source connector to load data from your Oracle database. There is an open source bundled connector from Confluent. It has been packaged and tested with the rest of the Confluent Platform, including the schema registry. Using this connector is as easy as writing a simple connector configuration and starting a standalone Kafka Connect process or making a REST request to a Kafka Connect cluster. Documentation for this connector can be found here
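To give an idea (a sketch only; the connection URL, credentials, table and column names are placeholders, and the exact property set depends on the connector version), a standalone worker with this connector is driven by a small properties file:

    # oracle-jdbc-source.properties
    name=oracle-jdbc-source
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    connection.url=jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1
    connection.user=kafka_connect
    connection.password=secret
    table.whitelist=MY_TABLE
    mode=timestamp+incrementing
    timestamp.column.name=UPDATED_AT
    incrementing.column.name=ID
    topic.prefix=oracle-

    # start a standalone Kafka Connect worker with this connector
    connect-standalone connect-standalone.properties oracle-jdbc-source.properties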
To move change data in real time from Oracle transactional databases to Kafka, you first need to use a proprietary Change Data Capture (CDC) tool, which requires purchasing a commercial license, such as Oracle GoldenGate, Attunity Replicate, Dbvisit Replicate or Striim. Then you can leverage the Kafka Connect connectors that they all provide. They are all listed here
Debezium, an open-source CDC tool from Red Hat, is planning to work on a connector that does not rely on an Oracle GoldenGate license. The related JIRA is here.
You can use Kafka Connect for data import/export to Kafka. Using Kafka Connect is quite simple, because there is no need to write code. You just need to configure your connector.
You would only need to write code if no connector is available and you want to provide your own. There are already 50+ connectors available.
There is a connector ("Golden Gate") for Oracle from Confluent Inc: https://www.confluent.io/product/connectors/
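If you run Kafka Connect in distributed mode instead of standalone, the same kind of connector configuration can be submitted as JSON to the Connect REST API (host, port and all field values below are placeholders):

    curl -X POST -H "Content-Type: application/json" \
      --data '{
        "name": "oracle-jdbc-source",
        "config": {
          "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
          "connection.url": "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",
          "connection.user": "kafka_connect",
          "connection.password": "secret",
          "table.whitelist": "MY_TABLE",
          "mode": "timestamp+incrementing",
          "timestamp.column.name": "UPDATED_AT",
          "incrementing.column.name": "ID",
          "topic.prefix": "oracle-"
        }
      }' \
      http://connect-host:8083/connectors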
On the surface this is technically feasible. However, understand that the question has implications for downstream applications.
So to comprehensively address the original question regarding technical feasibility, bear in mind the following:
Are ordering/commit semantics important? Particularly across tables.
Are continued table changes across instance crashes (Kafka/CDC components) important?
When the table definition changes - do you expect the application to continue working, or will you resort to planned change control?
Will you want to move partial subsets of data?
What datatypes need to be supported? e.g. Nested table support etc.
Will you need to handle compressed logical changes - e.g. on update/delete operations? How will you address this on the consumer side?
You can also consider using OpenLogReplicator. This is a new open-source tool which reads Oracle database redo logs and sends messages to Kafka. Since it is written in C++, it has very low latency (around 10 ms) and still a relatively high throughput.
It is in an early stage of development, but there is already a working version. You can try building a POC and check for yourself how it works.
I have a number of applications that are running in different data centers, developed and maintained by different vendors. Each application has a web service that exposes relevant log data (audit data, security data, data related to cost calculations, performance data, ...) consolidated for the application.
My task is to get data from each system into a setup of Elasticsearch, Kibana and Logstash so I can create business reports or just view data the way I want to.
Assuming I have a JBoss application server that integrates with these "expose log" services, what is the best way to feed Elasticsearch? Some Logstash plugin that calls each service? Does JBoss use some Logstash plugin? Or some other way?
The best way is to set up the logstash shipper on the server where the logs are created.
This will then ship them to a Redis server.
Another logstash instance will then pull the data from Redis, and index it, and ship it to Elasticsearch.
Kibana will then provide an interface to Elasticsearch, which is where the goodness happens.
I wrote a post on how to install Logstash a little while ago. Versions may have been updated since, but it's still valid:
http://www.nightbluefruit.com/blog/2013/09/how-to-install-and-setup-logstash/
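As a rough illustration of that shipper → Redis → indexer chain (hostnames, file paths and the Redis key are placeholders, and plugin options vary between Logstash versions):

    # shipper.conf -- runs on each server that produces logs
    input {
      file {
        path => "/var/log/myapp/*.log"
      }
    }
    output {
      redis {
        host      => "redis.example.com"
        data_type => "list"
        key       => "logstash"
      }
    }

    # indexer.conf -- runs centrally, pulls from Redis and indexes into Elasticsearch
    input {
      redis {
        host      => "redis.example.com"
        data_type => "list"
        key       => "logstash"
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch.example.com:9200"]
      }
    }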
Does your JBoss application server write its logs to a file?
In my experience, my JBoss applications (on multiple servers) write their logs to files. Then I use Logstash to read the log files and ship all the logs to a central server. You can refer to here.
So, what you can do is set up a Logstash shipper in each data center.
If you do not have permission to do this, you may want to write a program to get the logs from the different web services and save them to a file, then set up Logstash to read that log file. So far, Logstash does not have any plugin that can call web services.
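For example (a minimal sketch; the file path, index name and Elasticsearch host are placeholders), a Logstash instance that tails such a log file and ships it straight to Elasticsearch could be configured roughly like this:

    input {
      file {
        path           => "/var/log/exposed-services/*.log"
        start_position => "beginning"
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch.example.com:9200"]
        index => "app-logs-%{+YYYY.MM.dd}"
      }
    }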