I have batched DB queries in my program and would like to measure the average time for batched queries to finish, broken down by batch size. Currently I have
rate(batched_query_duration_seconds_sum{job="myprogram"}[5m]) / rate(batched_query_duration_seconds_count{job="myprogram"}[5m])
that metric has the label batch_size, I think the correct query would be something like
rate by (batch_size) (batched_query_duration_seconds_sum{job="myprogram"}[5m]) / rate by (batch_size) (batched_query_duration_seconds_count{job="myprogram"}[5m])
but that's a syntax error. How can I do this? Thanks.
Try the following query. In PromQL the by clause belongs to aggregation operators such as sum(), not to functions like rate(), so take the rates first and group afterwards:
sum(rate(batched_query_duration_seconds_sum{job="myprogram"}[5m])) by (batch_size) / sum(rate(batched_query_duration_seconds_count{job="myprogram"}[5m])) by (batch_size)
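As a sanity check on what that query computes, here is the same arithmetic in plain Python with made-up per-series rate values (the label values and numbers are hypothetical, not taken from any real scrape): summing the two rates by batch_size and then dividing yields the average duration per query for each batch size.

```python
# Hypothetical per-series rates over the 5m window, keyed by (batch_size, instance).
# sum_rate   ~ rate(batched_query_duration_seconds_sum[5m]):   seconds accrued per second
# count_rate ~ rate(batched_query_duration_seconds_count[5m]): queries finished per second
sum_rate = {("10", "a"): 1.2, ("10", "b"): 0.8, ("50", "a"): 3.0}
count_rate = {("10", "a"): 4.0, ("10", "b"): 4.0, ("50", "a"): 2.0}

def aggregate_by_batch_size(series):
    """Mimics sum(...) by (batch_size): collapse every label except batch_size."""
    out = {}
    for (batch_size, _instance), value in series.items():
        out[batch_size] = out.get(batch_size, 0.0) + value
    return out

total_time = aggregate_by_batch_size(sum_rate)
total_count = aggregate_by_batch_size(count_rate)

# Average seconds per query, per batch size.
avg = {bs: total_time[bs] / total_count[bs] for bs in total_time}
# batch_size=10: (1.2 + 0.8) / (4.0 + 4.0) = 0.25 s/query
# batch_size=50: 3.0 / 2.0 = 1.5 s/query
```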
Related
I am working with Spring Data Mongo. I have around 2000 documents stored (this will probably reach 10000 in the upcoming 2-3 months), and I would like to extract them all. However, the query takes around ~2.5 seconds, which is pretty bad in my opinion. I am using the MongoRepository default findAll().
I tried increasing the cursor batchSize to 500, 1000, and 2000 without much improvement (the best result was 2.13 seconds).
Currently I'm using a workaround: I store the documents in a different collection used as a cache, and extracting that data takes around 0.25 seconds. But I would like to figure out how to fix the original query's execution time.
I would like the query to return in less than 1 second; less is even better.
Without knowing the exact details I cannot recommend a specific method.
But for selective read queries, indexing will help you.
Please try indexing the collection:
https://docs.mongodb.com/manual/indexes/
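To illustrate the idea (plain Python, not the MongoDB API; the collection contents are made up): an index is essentially a precomputed map from field value to matching documents, so a selective query can jump straight to its matches instead of scanning every document.

```python
# A toy "collection" of 9000 documents.
docs = [{"_id": i, "status": "active" if i % 3 == 0 else "archived"}
        for i in range(9000)]

# Without an index: a full collection scan touches every document.
scan_hits = [d for d in docs if d["status"] == "active"]

# With an "index" on status: a precomputed value -> documents map,
# built once and consulted on each query.
status_index = {}
for d in docs:
    status_index.setdefault(d["status"], []).append(d)

index_hits = status_index.get("active", [])
# Same result, but only the matching documents are touched at query time.
```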
We are working with a Vertica 8.1 table containing 500 columns and 100 000 rows.
The following query takes around 1.5 seconds to execute, even when running the vsql client directly on one of the Vertica cluster nodes (to eliminate any network latency issue):
SELECT COUNT(*) FROM MY_TABLE WHERE COL_132 IS NOT NULL and COL_26 = 'anotherValue'
But when checking the query_requests table, the request_duration_ms is only 98 ms, and the resource_acquisitions table doesn't indicate any delay in resource acquisition. I can't understand where the rest of the time is spent.
If I then export to a new table only the columns used by the query, and run the query on this new, smaller, table, I get a blazing fast response, even though the query_requests table still tells me the request_duration_ms is around 98 ms.
So it seems that the number of columns in the table impacts query execution time, even when most of those columns are not referenced. Is that correct? If so, why is that?
Thanks in advance.
It sounds like your query is running against the (default) superprojection, which includes all of the table's columns. Even though Vertica is a columnar database (with the associated compression and encoding), your query is probably still touching more data than it needs to.
You can create projections to optimize your queries. A projection contains a subset of columns; if one is available that has all the columns your query needs, then the query uses that instead of the superprojection. (It's a little more complicated than that, because physical location is also a factor, but that's the basic idea.) You can use the Database Designer to create some initial projections based on your schema and sample queries, and iteratively improve it over time.
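As a rough sketch of the columnar idea (illustrative Python, not Vertica internals; the column names echo the question, the data is invented): with column-oriented storage, the count only has to read the columns the predicate references, and a projection materializes just that subset.

```python
# A 500-column table stored column-wise: one list per column.
rows = 10_000
table = {f"COL_{i}": [None] * rows for i in range(500)}
table["COL_132"] = [i if i % 2 == 0 else None for i in range(rows)]
table["COL_26"] = ["anotherValue" if i % 5 == 0 else "x" for i in range(rows)]

# SELECT COUNT(*) WHERE COL_132 IS NOT NULL AND COL_26 = 'anotherValue'
# only needs the two referenced columns, no matter how many columns exist:
count = sum(1 for v132, v26 in zip(table["COL_132"], table["COL_26"])
            if v132 is not None and v26 == "anotherValue")

# A "projection" keeps just the columns a query needs, stored together:
projection = {c: table[c] for c in ("COL_132", "COL_26")}
```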
I was running Vertica 8.1.0-1; it turned out the issue was a Vertica bug in the query planning phase that caused a performance degradation. It was fixed in versions >= 8.1.1:
https://my.vertica.com/docs/ReleaseNotes/8.1.x/Vertica_8.1.x_Release_Notes.htm
VER-53602 - Optimizer - This fix improves complex query performance during the query planning phase.
When I query using HeidiSql, the console will give the info like this:
/* Affected rows: 0 Found rows: 2,632,206 Warnings: 0 Duration for 1 query: 0.008 sec. (+ 389.069 sec. network) */
I want to use JDBC to do the performance testing on our database.
So distinguishing the network cost from the actual query cost is important in my case.
How can I get the network cost of a MariaDB query using JDBC? Is it possible?
In HeidiSQL, I define the query duration as the time mysql_real_query took to execute.
That "network" duration is the time mysql_store_result takes afterwards.
See also:
https://mariadb.com/kb/en/mariadb/mysql_real_query/
https://mariadb.com/kb/en/mariadb/mysql_store_result/
I guess JDBC has methods similar to the C API, so the above logic from HeidiSQL should be easy to adapt.
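For reference, the same split exists in JDBC: the time spent in Statement.executeQuery roughly corresponds to mysql_real_query, and the time spent fetching the ResultSet corresponds to mysql_store_result (which, against a remote server, includes the network transfer). Here is the measurement idea sketched with Python's stdlib sqlite3 driver (in-memory database, so the fetch time here is pure client-side cost, but the two-timer pattern carries over):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100_000)])

t0 = time.perf_counter()
cur = conn.execute("SELECT v FROM t")   # ~ query duration (mysql_real_query)
t1 = time.perf_counter()
rows = cur.fetchall()                   # ~ result transfer (mysql_store_result)
t2 = time.perf_counter()

print(f"query: {t1 - t0:.4f}s, fetch: {t2 - t1:.4f}s")
```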
I am perplexed at this point. I spent a day or three in the deep end of Influx and Grafana to get some graphs plotted that are crucial to my needs. However, for the last one I need to total up two metrics (two increment counts, stored in the value column). Let's call them notifications.one and notifications.two. In the graph I would like to display them as a single line showing the total (notifications.one + notifications.two) instead of two separate ones.
I tried the usual SELECT sum(value) from the two, but I don't get any data back (even though it exists!). The Influx documentation also mentions merge(), but I cannot get that to work either.
The documentation for merge requires something like:
SELECT mean(value) FROM /notifications.*/ WHERE ...
This also comes back as a flat zero line.
I hope my question makes sense; I don't have enough background to convey the problem as clearly as I'd like.
Thank you.
With InfluxDB 0.12 you can write:
SELECT MEAN(usage_system) + MEAN(usage_user) + MEAN(usage_irq) AS cpu_total
FROM cpu
WHERE time > now() - 10s
GROUP BY host;
These features are not really documented yet, but you can have a look at the supported mathematical operators.
In InfluxDB 0.9 there is no way to merge query results across measurements. Within a measurement all series are merged by default, but no series can be merged across measurements. See https://influxdb.com/docs/v0.9/concepts/08_vs_09.html#joins for more detail.
A better schema for 0.9 is, instead of the two measurements notifications.one and notifications.two, a single measurement notifications with foo=one and foo=two as tags. Then the query for the merged values is just SELECT MEAN(value) FROM notifications, and the per-series query is SELECT MEAN(value) FROM notifications GROUP BY foo
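To make the grouping concrete, here is the tag-based schema mimicked in plain Python (hypothetical point values): one measurement with a foo tag supports both the merged mean and the per-tag means from the same data.

```python
# Points in a single "notifications" measurement, tagged foo=one / foo=two.
points = [("one", 3.0), ("one", 5.0), ("two", 10.0), ("two", 2.0)]

# SELECT MEAN(value) FROM notifications  -- all tags merged
merged_mean = sum(v for _, v in points) / len(points)

# SELECT MEAN(value) FROM notifications GROUP BY foo  -- one series per tag
by_foo = {}
for foo, v in points:
    by_foo.setdefault(foo, []).append(v)
per_tag_mean = {foo: sum(vs) / len(vs) for foo, vs in by_foo.items()}
```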
As per the question, I think it's possible to combine queries, much like nested queries in an RDBMS. This can be achieved using Continuous Queries in InfluxDB; this documentation explains it clearly.
Basically you need to create a query from other queries and then use the newly created query to fetch the series.
https://docs.influxdata.com/influxdb/v1.1/query_language/continuous_queries/#substituting-for-nested-functions
I have been switching from statsd + Graphite + Grafana to using InfluxDB instead of Graphite. However, InfluxDB behaves a bit differently from Graphite when it comes to missing values.
If a timeseries does not produce new points for a period of time, the plot in Grafana will continue to show the last value written:
This happens even when specifying fill(0) or fill(null) in the query. When using the Data Interface of InfluxDB it also seems to be filling using the previous values:
Since I have some alerting that will be triggered by missing values, having the old values reused disables my alerts.
Any idea on how to fix this?
If you want to show a continuous graph, there is a workaround:
apply mean() and GROUP BY time() with a fill.
For example, something like this:
SELECT mean("fieldName") FROM measurement WHERE time > now() - 1h GROUP BY time(10s) fill(0)
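To show what GROUP BY time(10s) with fill(0) does (plain Python, hypothetical timestamps and values): points are bucketed into fixed 10-second windows, each window gets the mean of its points, and empty windows are emitted as 0 rather than dropped, which is what keeps the graph continuous instead of repeating the last value.

```python
# (unix_timestamp, value) points, with gaps between t=20 and t=50.
points = [(3, 1.0), (7, 3.0), (21, 4.0), (55, 6.0)]

start, end, step = 0, 60, 10
buckets = {t: [] for t in range(start, end, step)}
for ts, v in points:
    buckets[(ts // step) * step].append(v)

# mean() per window, fill(0) for windows with no points:
series = {t: (sum(vs) / len(vs) if vs else 0.0) for t, vs in buckets.items()}
# -> windows 10, 30, 40 come back as 0.0 instead of disappearing
```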