I am facing a ReceiveTimeout while taking a datastore backup using the REST utility provided by OpenDaylight.
However, I am not able to increase this value, as it appears to be hardcoded in the Java code.
Does anyone know where this can be modified, and which field to set to a larger value?
I tried looking into "org.opendaylight.controller.cluster.datastore.cfg" but could not find any field for this timeout.
I am using Prometheus as the datasource for a Grafana dashboard. I add the Mesh IP as the URL of the default datasource. Whenever Grafana runs, it creates grafana.db, which contains all the information related to datasources. I need the user to be able to change the default URL of the datasource. Up to this point, everything works very well.
Now my problem is: when I change the IP of the default datasource and run the container again, it picks up the default URL instead of the URL last saved in the grafana.db file. I want it to read the default datasource IP from grafana.db if the file is available, and otherwise fall back to the default Mesh IP.
I can think of two different approaches for this:
Running some queries using Postgres.
Getting notified from the GUI whenever the URL is changed by the user, and updating that URL in the variable.
I am completely lost on how to solve this problem. Can anyone please help me solve it using the approaches mentioned above, or any other?
Thanks in advance.
The grafana.db file reverts to the old default URL because the data is not being persisted across restarts.
For data persistence, you need to map Grafana to an external DB. Install another database outside Docker and use the following link to map it to Grafana: database_configuration
Also look at provisioning.
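For reference, a rough sketch of the [database] section of grafana.ini pointing Grafana at an external Postgres instance (host, database name, and credentials are placeholders):

[database]
type = postgres
host = 127.0.0.1:5432
name = grafana
user = grafana
password = secret

And a datasource provisioning file (e.g. provisioning/datasources/prometheus.yaml) could seed the default URL from the Mesh IP on first start, roughly like:

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://<mesh-ip>:9090
    isDefault: true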
How do I set up an external database (MySQL or Postgres; I'm not concerned with which one at this point) for use with metadata?
At the moment I have Spring Batch writing the results of jobs to MongoDB, and that works fine, but I'm not keeping track of job status, so the jobs are run from the start every time, even if interrupted halfway through.
There are plenty of examples of how to avoid this, but I can't seem to find a clear answer on what I need to configure to send the metadata somewhere real rather than keeping it in-memory.
I attempted adding a properties file, but it had no effect:
# for Postgres:
batch.jdbc.driver=org.postgresql.Driver
batch.jdbc.url=jdbc:postgresql://localhost/postgres
batch.jdbc.user=postgres
batch.jdbc.password=mysecretpassword
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.PostgreSQLSequenceMaxValueIncrementer
batch.schema.script=classpath:/org/springframework/batch/core/schema-postgresql.sql
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-postgresql.sql
batch.jdbc.testWhileIdle=false
batch.jdbc.validationQuery=
You need to configure a bean of type DataSource in your batch application context (or extend DefaultBatchConfigurer and set the data source you want to use to store metadata).
There are many samples here: https://github.com/spring-projects/spring-batch/tree/master/spring-batch-samples
You can find the data source configuration here: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-samples/src/main/resources/data-source-context.xml
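As a minimal sketch (class and bean names are arbitrary), assuming the Postgres settings from the properties file above, a Java config could look like this:

import javax.sql.DataSource;

import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
@EnableBatchProcessing
public class BatchMetadataConfiguration {

    // With @EnableBatchProcessing, a DataSource bean in the context makes
    // Spring Batch build a JDBC-backed JobRepository instead of the
    // in-memory one, so job/step status survives restarts.
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://localhost/postgres");
        ds.setUsername("postgres");
        ds.setPassword("mysecretpassword");
        return ds;
    }
}

Note that the metadata tables still need to exist; the schema-postgresql.sql script referenced in the properties above creates them.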
I wrote a JMeter plugin that analyzes JMeter sample events and sends the results to a specified server. In this plugin, the user can configure the server IP/port, etc. These settings are normally supplied as variables read from a given CSV file (using the CSV Data Set Config element); for example, the fields on the plugin GUI could be filled in as ${serverIP} and ${serverPort}.
My JMX file and plugin work well with this configuration in single (non-distributed) mode. But now I need to run them in master-slave mode, and here the problem appears: ${serverIP} and ${serverPort} are not converted to their real values on the master side. I added some logging to my plugin code, where I use the JMeter API to read the value:
public String getServerIP() {
    // Reads the value configured on the plugin GUI; on the master this comes
    // back as the literal text "${serverIP}" instead of the resolved value.
    return getPropertyAsString(SERVER_IP);
}
and found that JMeter returns the literal variable name ${serverIP}, instead of converting it to the real value as it does in single mode.
I googled and searched the JMeter docs, but found nothing. How can I get JMeter to read a configured variable from a given CSV file on the master side (I know it can do this on the slave side)? Does JMeter not support this?
Any help is really appreciated! If you need more info, please let me know. Thanks a lot!
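One possible workaround (an assumption on my part, not something verified against this plugin): in distributed mode the CSV Data Set Config only runs on the slaves, so the master never assigns a value to ${serverIP}. If the values can be supplied as JMeter properties instead of CSV variables, the plugin fields can reference them with the __P function, e.g. ${__P(serverIP)} instead of ${serverIP}, and the test can be started like:

jmeter -n -t test.jmx -R slave1,slave2 -GserverIP=10.0.0.5 -GserverPort=8080 -JserverIP=10.0.0.5 -JserverPort=8080

Here -G sends each property to all remote servers, while -J sets it locally so the master side resolves it too (the host/port values are placeholders).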
I'm essentially using a column in my Parse Server DB as configuration for my app. I now realize Parse has a config feature specifically for this.
My question is: which is faster/better to use, and why? Running a query every single time I want to check, or checking the Parse config? Isn't checking the config variable just another query?
If anyone can provide proper/better/any documentation...
ParseConfig is faster because it stores the config in the cache, while a ParseQuery retrieves it from the DB. But you cannot use ParseConfig for every query operation, only to get things that are relevant to all of your users, like a "Message of the day" and so on.
Another very cool thing about ParseConfig is that if you don't have network connectivity, it automatically falls back to the client-side cache.
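As a rough illustration with the Parse Android SDK (the messageOfTheDay key is made up), fetching the config once and falling back to the cached copy looks like:

import com.parse.ConfigCallback;
import com.parse.ParseConfig;
import com.parse.ParseException;

// ...

ParseConfig.getInBackground(new ConfigCallback() {
    @Override
    public void done(ParseConfig config, ParseException e) {
        String motd;
        if (e == null) {
            // Fetched from the server; this also refreshes the local cache.
            motd = config.getString("messageOfTheDay", "Welcome!");
        } else {
            // No connectivity: fall back to the last cached config.
            motd = ParseConfig.getCurrentConfig().getString("messageOfTheDay", "Welcome!");
        }
        // ... use motd ...
    }
});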
I am new to Mule, and I have been struggling with a simple issue for a while now. I am trying to connect to flat files (.MDB, .DBF) located on a remote desktop from my Mule application, using Mule's generic database connector. I have tried different things here:
I am using the StelsDBF and StelsMDB drivers for JDBC connectivity. I tried connecting directly using the JDBC URL jdbc:jstels:mdb:host/path
I have also tried to access the files over FTP, by running a FileZilla server on the remote desktop and using the JDBC URL jdbc:jstels:dbf:ftp://user:password@host:21/path in my app.
None of these seem to work; I always get connection exceptions. If anyone has tried this before, what is the best way to go about connecting to a remote flat file with Mule? Your response will be greatly appreciated!
If you want to load the contents of the file inside a Mule flow, you should use the File or FTP connector; I don't know for sure about your JDBC option.
With the File connector you can access local files (files on the server where Mule is running), so you could try to mount the folders as a share.
Or run an FTP server like you already tried; that should work.
There is probably an error in your syntax or connection settings.
Please paste the complete XML of your Mule flow so we can see what you are trying to do.
Your use case is still not really clear to me: are you really planning to use HTTP to trigger the DB every time? Anyway, did you try putting the file on a local path and using that path in your database URL? Here is someone who says he had it working; he created a separate bean:
http://forums.mulesoft.com/questions/6422/setting_property_dynamically_on_jdbcdatasource.html
I think a local path may well be possible, and it's better to test that first.
Also take note of how to refer to a file path; look at the examples for the file connector: https://docs.mulesoft.com/mule-user-guide/v/3.7/file-transport-reference#namespace-and-syntax
If you manage to get it working and can use the path directly in the JDBC URL, you should have a look at the poll scope:
https://docs.mulesoft.com/mule-user-guide/v/3.7/poll-reference
You can use your DB connector as an inbound endpoint when it is wrapped in a poll scope, as sketched below.
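As an untested sketch of that setup (the StelsMDB driver class name is a guess on my part, and the file path and table name are placeholders):

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:db="http://www.mulesoft.org/schema/mule/db"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/db http://www.mulesoft.org/schema/mule/db/current/mule-db.xsd">

    <!-- Generic JDBC config pointing at a local copy of the .mdb file -->
    <db:generic-config name="mdbConfig"
        url="jdbc:jstels:mdb:C:/data/sample.mdb"
        driverClassName="jstels.jdbc.mdb.MDBDriver2"/>

    <flow name="pollMdbFlow">
        <!-- The select wrapped in a poll scope acts as the inbound endpoint -->
        <poll frequency="10000">
            <db:select config-ref="mdbConfig">
                <db:parameterized-query><![CDATA[SELECT * FROM some_table]]></db:parameterized-query>
            </db:select>
        </poll>
        <logger level="INFO" message="#[payload]"/>
    </flow>
</mule>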
I experienced the same issue when connecting to a Microsoft Access database (*.mdb, *.accdb) using the Mule Database Connector. After further investigation, it was solved by installing the Microsoft Access Database Engine.
Another issue: I couldn't pass a parameter to construct a query the way I do for other databases, e.g.: SELECT * FROM emplcopy WHERE id = #[payload.id]
To solve this issue (sketched below):
I changed the query type from Parameterized to Dynamic.
I generated the query inside a Set Payload transformer (building the query as a String, e.g.: SELECT * FROM emplcopy WHERE id = '1').
Finally, I put it into the Dynamic query area: #[payload]
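Roughly, those steps in XML (untested; accessConfig is a placeholder for your connector config, and the id value is assumed to already be on the inbound payload):

<!-- Steps 1-2: build the SQL text first -->
<set-payload value="#['SELECT * FROM emplcopy WHERE id = ' + payload.id]"/>
<!-- Step 3: hand the prepared string to the dynamic query -->
<db:select config-ref="accessConfig">
    <db:dynamic-query><![CDATA[#[payload]]]></db:dynamic-query>
</db:select>

If id is a text column, wrap the value in single quotes inside the MEL expression.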