How to get notified of an updated IP in a Grafana datasource?

I am using Prometheus as the datasource for my Grafana dashboard. I set the Mesh IP as the URL of the default datasource. Whenever Grafana runs, it creates grafana.db, which contains all the information related to datasources. I need the user to be able to change the default URL of the datasource. Up to this point, everything works very well.
Now my problem is: when I change the IP of the default datasource and then run the container again, it picks up the default URL again instead of the last URL saved in grafana.db. I want it to read the default datasource IP from grafana.db if the file is available, and otherwise fall back to the default Mesh IP.
I can think of two different approaches for this:
Running some queries against the database directly (e.g. via Postgres).
Getting notified from the GUI whenever the user changes the URL, and updating that URL in the variable.
I am completely lost on how to solve this problem. Could anyone suggest how to approach it, using the approaches above or any other one?
Thanks in advance.

grafana.db reverts to the old default URL because the data is not persisted across restarts.
For data persistence, you need to map Grafana to an external database. Install another DB outside Docker and use the following link to map it to Grafana: database_configuration
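As a sketch of what that mapping can look like with the official Docker image (the host, credentials, and database name here are placeholders, not your actual values), Grafana's database settings can be overridden through GF_DATABASE_* environment variables:

# Point Grafana at an external Postgres instead of the embedded
# sqlite3 grafana.db, so datasource edits survive container restarts.
docker run -d --name grafana \
  -p 3000:3000 \
  -e GF_DATABASE_TYPE=postgres \
  -e GF_DATABASE_HOST=db-host:5432 \
  -e GF_DATABASE_NAME=grafana \
  -e GF_DATABASE_USER=grafana \
  -e GF_DATABASE_PASSWORD=secret \
  grafana/grafana

A simpler alternative, if sqlite is acceptable, is to mount /var/lib/grafana as a volume (e.g. -v grafana-storage:/var/lib/grafana) so the existing grafana.db itself survives restarts.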
Also look at provisioning.
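Provisioning lets you define the default datasource in a file that Grafana reads at startup, instead of relying on what happens to be in grafana.db. A minimal sketch (the name, URL, and port are placeholders), dropped into /etc/grafana/provisioning/datasources/:

# Create a datasource provisioning file; Grafana loads it at startup.
cat > /etc/grafana/provisioning/datasources/prometheus.yaml <<'EOF'
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://mesh-ip:9090   # the default Mesh IP goes here
    isDefault: true
    editable: true             # let users change the URL in the UI
EOF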

Related

Oracle ORDS: GET request returns old data, then after a period of time the changed data

I am having a problem with Oracle REST Data Services (ORDS for short) and I can't find a solution.
The problem is as follows:
We are using ORDS via a Tomcat web server, and I have two endpoints defined: one to update a dataset and one to get all datasets from that table.
If I update a value via my endpoint, the change is written to the table, but if I then fetch the table, ORDS responds only with the old, unchanged data. After a certain period of time of constantly re-requesting, it responds with the expected values (this happens after at most 1 minute, sometimes earlier).
Because of this behaviour I suspected some kind of caching, but I can't find any such configuration in the Oracle database or in Tomcat.
Another point in favour of this theory: I logged what happens in my GET procedure and found that only the one request with the correct values gets logged, as if the others never even happened.
The requests returning the old value come back in the 4-8 ms range, while the request with the correct data takes 100-200 ms.
Thanks for your help :)
I tried logging what happens, but found that only the request with the fresh values was logged.
I tried restarting the Tomcat web server to make sure the cache was cleared, but this didn't fix the problem.
I searched for a configuration in ORDS or Oracle where a cache would be defined, but none was ever set.
I tried setting the value via a SQL UPDATE instead of the endpoint, but even then the change shows up only after a delay.
Do you have a full overview of the communication path? Maybe there is a proxy in between?
If Tomcat has no caching configuration, and you restarted the web server during your tests and still see the same issue, then there is maybe something more to it...
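One quick check, as a sketch (the host and endpoint path are placeholders): compare the response headers of a fast (stale) and a slow (fresh) request. An intermediate cache usually gives itself away through headers such as Age, X-Cache, Cache-Control, or ETag.

# Dump only the response headers of the GET endpoint.
curl -s -o /dev/null -D - http://tomcat-host:8080/ords/schema/module/endpoint/

# Ask any intermediate cache to revalidate, then compare the body.
curl -s -H 'Cache-Control: no-cache' http://tomcat-host:8080/ords/schema/module/endpoint/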
Kind regards
M-Achilles

Elastic Cloud APM not showing logs in Transactions Page

What makes Kibana not show Docker container logs in the APM "Transactions" page under the "Logs" tab?
I verified that the logs are successfully generated with the associated "trace.id" for proper linking.
I have the exact same environment and configs (7.16.2) up via docker-compose locally, and there it works perfectly.
I could not figure out why this feature works locally but does not show up in the Elastic Cloud deployment.
UPDATE with solution:
I just solved the problem.
It is related to the Filebeat version.
From 7.16.0 onward, the transaction/log linking stops working.
I reverted Filebeat to version 7.15.2 and it started working again.
If you are not using Filebeat: we rolled our own logging implementation that sends logs from a queue in batches using the Bulk API.
We have our own "ElasticLog" class and use attributes to match the logs-* schema for the Log Stream.
In particular, we had to make sure that trace.id was the same as the actual trace's trace.id property. Then the logs started to show up there (it sometimes takes a few minutes).
Some more info on how to get the IDs:
We use the OpenTelemetry exporter for traces and ILoggerProvider for logs. They fire off batches independently of each other.
We populate the trace IDs at instantiation time of the class, as a default value, so that you are still in the context of the Activity. This also helps set the timestamp to exactly when the log was created.
This LogEntry then gets passed into the ElasticLogger processor and mapped, as described above, to the ElasticLog entry with the attributes Elasticsearch needs.
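As a rough illustration of the shape such a log document can take (the host, index name, credentials, and field values here are made up; only the ECS field names matter), a single entry sent through the Bulk API could look like this:

# Send one ECS-style log document via the Elasticsearch Bulk API.
# Its trace.id must equal the trace.id of the APM transaction for
# the entry to show up under the Transactions > Logs tab.
curl -X POST 'https://my-deployment.es.example.com:9243/_bulk' \
  -H 'Content-Type: application/x-ndjson' \
  -u elastic:password \
  --data-binary $'{"create":{"_index":"logs-myapp-default"}}\n{"@timestamp":"2022-01-10T12:00:00.000Z","log.level":"info","message":"order processed","service.name":"my-service","trace.id":"0af7651916cd43dd8448eb211c80319c"}\n'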

Consul KV store endpoints

I am working on designing a little project where I need to use Consul to manage application configuration in a dynamic way, so that all my app machines get the configuration at the same time without any inconsistency issues. We are already using Consul for service discovery, so I read more about it, and it looks like it has a key/value store that I can use to manage my configurations.
We already have Consul up and running, and below is the URL I get if I click the Key/Value store tab:
http://consul.host.orcld.com/ui/#/dc1/kv/
I am trying to do the following with Consul through the command line for now:
Create a new key/value pair in Consul.
Update the value of an existing key.
Keep a watch on an existing key, so that if its value changes I get notified and can see the new value.
I already have a few keys created with values through the UI, so I wanted to fetch the value of one of those keys, but I am confused about how to get it from the command line.
I tried the curl call below, but it doesn't give me the value; I get a 404 Not Found. Am I doing anything wrong here?
curl -XGET http://consul.host.orcld.com/vi/kv/example/reaper
Also, how can I create a new key/value pair and keep a watch on an existing key through the command line?
Try the format below; use v1 instead of vi:
curl http://127.0.0.1:8500/v1/kv/example/reaper
Documentation: https://www.consul.io/api/kv.html
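For your other two points, a minimal sketch against a local agent (127.0.0.1:8500; the key name is taken from your question): creating and updating are both a PUT to the same endpoint, and watching can be done with a blocking query or the consul watch helper.

# Create or update a key (the same call does both).
curl -X PUT -d 'my-config-value' http://127.0.0.1:8500/v1/kv/example/reaper

# Read it back; note the value comes base64-encoded in the JSON.
curl http://127.0.0.1:8500/v1/kv/example/reaper

# Blocking query: pass the last seen X-Consul-Index and the call
# blocks until the key changes (or the wait time elapses).
curl 'http://127.0.0.1:8500/v1/kv/example/reaper?index=123&wait=60s'

# Or let the consul CLI invoke a handler whenever the key changes.
consul watch -type=key -key=example/reaper cat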

OpenDaylight: how to configure the ReceiveTimeout for CaptureSnapshotReply

I am facing a ReceiveTimeout while doing a datastore backup using the REST utility provided by OpenDaylight.
However, I am not able to change this value, as it seems to be hardcoded in the Java code.
Does someone know where this can be modified, and which field to set to a higher value?
I tried looking into "org.opendaylight.controller.cluster.datastore.cfg" but could not find any field to modify this timeout.

Mule: connect to remote flat files

I am new to Mule and have been struggling with a simple issue for a while now. I am trying to connect to flat files (.MDB, .DBF) located on a remote desktop from my Mule application, using Mule's generic database connector. I have tried different things here:
I am using the StelsDBF and StelsMDB drivers for JDBC connectivity. I tried connecting directly using the JDBC URL jdbc:jstels:mdb:host/path
I have also tried to access the files through FTP, by running a FileZilla server on the remote desktop and using the JDBC URL jdbc:jstels:dbf:ftp://user:password#host:21/path in my app.
Neither of these works, as I always get connection exceptions. If anyone has tried this before, what is the best way to go about connecting to a remote flat file from Mule? Your response will be greatly appreciated!
If you want to load the contents of the file inside a Mule flow, you should use the File or FTP connector; I don't know for sure about your JDBC option.
With the File connector you can access local files (files on the server where Mule is running), so you could try to mount the remote folders as a share.
Or run an FTP server like you already tried; that should work.
There is probably an error in your syntax or connection settings (see the quick curl check below).
Please paste the complete XML of your Mule flow so we can see what you are trying to do.
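To rule out the connection itself, you could first verify the FTP credentials and path outside Mule, for example with curl (the host, credentials, and file name are placeholders):

# List the remote directory to confirm the path exists.
curl 'ftp://user:password@host:21/path/'

# Download one of the flat files to confirm read access.
curl 'ftp://user:password@host:21/path/example.dbf' -o example.dbf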
Your use case is still not really clear to me: are you really planning to use HTTP to trigger the DB every time? Anyway, did you try putting the file on a local path and using that path in your database URL? Here is someone who says he had it working; he created a separate bean:
http://forums.mulesoft.com/questions/6422/setting_property_dynamically_on_jdbcdatasource.html
I think a local path may well be possible, and it's better to test that first.
Also take note of how to refer to a file path; look at the examples for the file connector: https://docs.mulesoft.com/mule-user-guide/v/3.7/file-transport-reference#namespace-and-syntax
If you manage to get it working and can use the path directly in the JDBC URL, you should have a look at the poll scope:
https://docs.mulesoft.com/mule-user-guide/v/3.7/poll-reference
You can use your DB connector as an inbound endpoint when it is wrapped in a poll scope.
I experienced the same issue when connecting to a Microsoft Access database (*.mdb, *.accdb) using the Mule Database Connector. After further investigation, it was solved by installing the Microsoft Access Database Engine.
Another issue: I couldn't pass a parameter to construct a query the way I do for other databases, e.g.: SELECT * FROM emplcopy WHERE id = #[payload.id]
To solve this issue:
I changed the query type from Parameterized to Dynamic.
I generated the query inside a Set Payload transformer (building the query as a string, e.g.: SELECT * FROM emplcopy WHERE id = '1').
Finally, I put it into the Dynamic query area: #[payload]
