We need to enable millisecond timestamps in only one specific facility of rsyslog. I think $ActionFileDefaultTemplate RSYSLOG_FileFormat would enable milliseconds for all syslog output, not just for a specific local facility.
How do I apply such a template to just one local facility of the kind below?
local5.* /opt/logs/my_app.log
You could create a named template that outputs the milliseconds, and then bind it to the action like:
local5.* /opt/logs/my_app.log;template-name
See this for further info: https://www.thegeekdiary.com/understanding-rsyslog-templates/
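As a sketch in the legacy rsyslog.conf syntax (the template name MsTimestamp is illustrative; the built-in RSYSLOG_FileFormat template can also be bound per-action, which already carries high-precision timestamps):

```
# Bind the built-in high-precision template to just this one action:
local5.* /opt/logs/my_app.log;RSYSLOG_FileFormat

# Or define a custom named template and bind it instead.
# date-rfc3339 renders the timestamp with sub-second precision.
$template MsTimestamp,"%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag%%msg%\n"
local5.* /opt/logs/my_app.log;MsTimestamp
```

Other facilities keep using the default template, since the binding applies only to this action line.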
I am using:
var registry = prometheus.NewRegistry()
I know I can register some goroutine-related (etc.) metrics with:
registry.MustRegister(collectors.NewGoCollector())
But I cannot see the HTTP metrics that I get when using the default registry, such as:
promhttp_metric_handler_requests_total{code="200"} 1
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
How can I expose these metrics?
Also, I cannot see http_requests_total regardless of the registry I use. Is there a way to expose it automatically (instead of defining it myself)?
You also need to register your custom-defined metric (http_requests_total in this case).
This should solve your problem:
registry.MustRegister(promHttpRequestTotal)
Here promHttpRequestTotal is the variable in which your http_requests_total metric is defined.
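As for the promhttp_metric_handler_requests_total series: those come from instrumentation of the /metrics handler itself, which the default registry sets up for you; with a custom registry you opt in via promhttp.InstrumentMetricHandler. A minimal sketch (the counter variable, labels, and port are illustrative, not from your code):

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	registry := prometheus.NewRegistry()
	registry.MustRegister(collectors.NewGoCollector())

	// http_requests_total is never created automatically: it has to be
	// defined and registered by you, then incremented in your handlers.
	httpRequestsTotal := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "Total HTTP requests served.",
		},
		[]string{"code"},
	)
	registry.MustRegister(httpRequestsTotal)

	// InstrumentMetricHandler wraps the /metrics handler and registers
	// promhttp_metric_handler_requests_total (and the in-flight gauge)
	// on the given registry.
	http.Handle("/metrics", promhttp.InstrumentMetricHandler(
		registry, promhttp.HandlerFor(registry, promhttp.HandlerOpts{}),
	))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

After a few requests to /metrics, the promhttp_metric_handler_requests_total counter should show up in the scrape output alongside your registered metrics.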
I want to add a new setting to the ClickHouse codebase.
After making the changes and compiling ClickHouse, I want to test it.
Can I set that setting during authentication using clickhouse-client?
eg let's say there is a setting named max_concurrent_queries_for_user
./clickhouse-client --port 6667 --send_logs_level=trace SET max_concurrent_queries_for_user=100
I can log in like this, but I am not sure whether the setting is actually applied.
clickhouse-client has a rich set of options.
To get a full list of available options run the command:
clickhouse-client --help
Main options:
...
--max_concurrent_queries_for_user arg The maximum number of concurrent
requests per user.
--insert_deduplicate arg For INSERT queries in the replicated
table, specifies that deduplication of
insertings blocks should be preformed
...
The option --max_concurrent_queries_for_user defines "the maximum number of concurrent requests per user".
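As a sketch, the setting can be passed as a long option at connection time and then checked against system.settings (the port is taken from your question; the changed column is 1 when a setting differs from its default):

```shell
./clickhouse-client --port 6667 --max_concurrent_queries_for_user=100 \
    --query "SELECT name, value, changed FROM system.settings
             WHERE name = 'max_concurrent_queries_for_user'"
```

In an interactive session, the same SELECT from system.settings tells you whether your newly added setting was picked up.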
I want to build some sort of materialized view on system.merges, system.metrics, and system.asynchronous_metrics, so that I get a time-series view of system health (memory consumption, etc.).
Is this possible? I tried it for system.merges, but all I get are the currently running merges.
You can use the following approaches:
export metrics via the Graphite protocol to ClickHouse itself:
turn on Graphite export: https://clickhouse.tech/docs/en/operations/server_settings/settings/#server_settings-graphite
use https://github.com/lomik/graphite-clickhouse to store the exported data back in ClickHouse
a complete Vagrant demo stand is here: https://github.com/Slach/clickhouse-metrics-grafana/
use the undocumented system.metric_log table:
look at https://github.com/ClickHouse/ClickHouse/issues/6363, https://github.com/ClickHouse/ClickHouse/search?q=metric_log and https://github.com/ClickHouse/ClickHouse/blob/master/dbms/programs/server/config.d/metric_log.xml
turn on system.metric_log in /etc/clickhouse-server/config.d/metric_log.xml:
<yandex>
<metric_log>
<database>system</database>
<table>metric_log</table>
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
<collect_interval_milliseconds>1000</collect_interval_milliseconds>
</metric_log>
</yandex>
Be careful: according to https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Interpreters/MetricLog.cpp#L18, system.asynchronous_metrics is not flushed into system.metric_log.
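Once system.metric_log is enabled, a time-series query over it can be sketched like this (the exact metric columns vary between ClickHouse versions, so check DESCRIBE TABLE system.metric_log first; CurrentMetric_MemoryTracking is used here as an example):

```sql
SELECT
    event_time,
    CurrentMetric_MemoryTracking AS memory_bytes
FROM system.metric_log
WHERE event_date = today()
ORDER BY event_time
LIMIT 100
```

Each row is one collect_interval_milliseconds sample, which is exactly the time-series view of system.metrics you are after.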
I deployed Prometheus Node Exporter pod on k8s. It worked fine.
But when I try to get system metrics by calling the Node Exporter metrics API from my custom Go application:
curl -X GET "http://[my Host]:9100/metrics"
the result format is like this:
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.7636e-05
go_gc_duration_seconds{quantile="0.25"} 2.466e-05
go_gc_duration_seconds{quantile="0.5"} 5.7992e-05
go_gc_duration_seconds{quantile="0.75"} 9.1109e-05
go_gc_duration_seconds{quantile="1"} 0.004852894
go_gc_duration_seconds_sum 1.291217651
go_gc_duration_seconds_count 11338
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 8
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.12.5"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.577128e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.0073577064e+10
...
This long text output is hard to parse, and I want to get the results in JSON format so I can parse them easily.
https://github.com/prometheus/node_exporter/issues/1062
I checked Prometheus Node Exporter GitHub Issues and someone recommended prom2json.
But this is not what I'm looking for, because I would have to run an extra process to execute prom2json to get the results. I want to get Node Exporter's system metrics by simply making an HTTP request, or by using some kind of native Go package in my code.
How can I get those Node Exporter metrics in JSON format?
You already mentioned prom2json, and you can pull that package into your Go file by importing github.com/prometheus/prom2json.
The sample executable in the repo has all the building blocks you need: first open the URL, then use the prom2json package to read the data and store the result.
However, you should also have a look at expfmt.TextParser, as that is the native way to ingest Prometheus-formatted metrics.
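If you want to avoid extra dependencies entirely, a rough standard-library-only sketch of turning the text format into JSON looks like this. It ignores the HELP/TYPE metadata and keeps the whole name-plus-labels string as the key; for anything serious, prefer expfmt.TextParser:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// parseMetrics does a minimal parse of the Prometheus text exposition
// format into a name -> value map. Comment lines (# HELP, # TYPE) are
// skipped, and the label block, if any, simply stays part of the key.
func parseMetrics(body string) map[string]float64 {
	out := make(map[string]float64)
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blank lines and comments
		}
		// The value is everything after the last space.
		idx := strings.LastIndex(line, " ")
		if idx < 0 {
			continue
		}
		v, err := strconv.ParseFloat(line[idx+1:], 64)
		if err != nil {
			continue
		}
		out[line[:idx]] = v
	}
	return out
}

func main() {
	// Sample taken from the question; in real code the body would come
	// from http.Get("http://<host>:9100/metrics").
	sample := `# TYPE go_goroutines gauge
go_goroutines 8
go_gc_duration_seconds{quantile="0.5"} 5.7992e-05`
	b, _ := json.Marshal(parseMetrics(sample))
	fmt.Println(string(b))
}
```

Note that node_exporter does not normally append timestamps to sample lines; if yours does, the last-space split above would need adjusting.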
Could anybody let me know how to get the response time of each transaction in seconds/minutes/hours rather than in milliseconds?
How do I configure this in the JMeter test plan for each transaction separately?
For example: transaction: load task, transaction: save task, transaction: login, and transaction: sign out.
It isn't possible on a per-transaction basis; it is something you configure globally.
Look into the jmeter.properties file in the /bin folder of your JMeter installation:
# Timestamp format - this only affects CSV output files
# legitimate values: none, ms, or a format suitable for SimpleDateFormat
#jmeter.save.saveservice.timestamp_format=ms
#jmeter.save.saveservice.timestamp_format=yyyy/MM/dd HH:mm:ss.SSS
See SimpleDateFormat JavaDoc for possible values.
You can specify the required format by uncommenting and altering the jmeter.save.saveservice.timestamp_format property as follows:
jmeter.save.saveservice.timestamp_format=HH:mm:ss
Or pass it as a parameter in the JMeter startup command:
jmeter -Jjmeter.save.saveservice.timestamp_format=HH:mm:ss -n -t /path/to/your/script.jmx -l /path/to/logfile.jtl
See Apache JMeter Properties Customization Guide for more details on how to make use of different JMeter properties.