I am using pyramid_beaker as my session factory and I want to store sessions in a MySQL database, so I want to know how to configure that.
I have gone through
http://docs.pylonsproject.org/projects/pyramid_beaker/en/latest/
but it does not solve my problem:
it gives no clue where to put the MySQL username, password, etc.
pyramid_beaker is a thin wrapper around Beaker which pulls the settings from your INI file into Beaker. Beaker [1] itself contains docs on how to use its various backends. For example, if you're using the beaker.ext.database backend, you should set session.url = mysql://user:password@host:port/dbname, just like any other SQLAlchemy connection string.
[1] https://beaker.readthedocs.io/en/latest/configuration.html#options-for-sessions-and-caching
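For instance, a minimal sketch of what the session block of your application's INI file could look like (the table name, lock directory, key and secret below are placeholder values, not required names):

    # store sessions in MySQL via Beaker's database backend
    session.type = ext:database
    session.url = mysql://user:password@host:port/dbname
    # table used for the session rows
    session.table_name = beaker_sessions
    # file-based lock directory used by Beaker for locking
    session.lock_dir = %(here)s/data/sessions/lock
    session.key = mysession
    session.secret = changeme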
I am using Prometheus as the datasource for a Grafana dashboard. I am adding the Mesh IP as the URL of the default datasource. Whenever Grafana runs, it creates grafana.db, which contains all the information related to the datasource. I need it to work in such a way that the user can change the default URL of the datasource. So far, everything works well.
Now my problem is: when I change the IP of the default datasource and run the container again, it picks up the default URL again instead of the last URL saved in the grafana.db file. I want it to read the default datasource IP from grafana.db if the file is available and otherwise fall back to the default Mesh IP.
I can think of two different approaches for this:
Running some queries using Postgres.
Getting notified from the GUI whenever the URL is changed by the user and updating that URL in the variable.
I am completely lost on how to solve this problem. Can anyone please help me solve it using the approaches mentioned above, or any other one.
Thanks in advance.
The grafana.db reverts to the old default URL because the data is not being persisted across restarts.
For data persistence, you need to map Grafana to an external DB. Install another database outside Docker and use the following link to map it to Grafana: database_configuration
Also look at provisioning
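As a sketch, Grafana's internal database can be pointed at an external MySQL instance through the GF_DATABASE_* environment variables when the container is started (the host, database name and credentials below are placeholders; the same settings can also go in the [database] section of grafana.ini):

    # run Grafana with its internal database stored in an external MySQL
    docker run -d -p 3000:3000 \
      -e GF_DATABASE_TYPE=mysql \
      -e GF_DATABASE_HOST=mysql.example.com:3306 \
      -e GF_DATABASE_NAME=grafana \
      -e GF_DATABASE_USER=grafana \
      -e GF_DATABASE_PASSWORD=secret \
      grafana/grafana

Alternatively, provisioning lets you define the default Prometheus datasource (including its URL) in a YAML file that Grafana reads on every start, which avoids depending on grafana.db at all.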
Is it possible to set default.password dynamically, e.g. from a file? We have connected Presto to Zeppelin with a JDBC connector successfully; however, we are using a different authentication method that requires us to renew the password every day. I have checked the current GitHub repository and found that there is an interpreter.json that takes default.password from the interpreter settings in Zeppelin. If I change default.password to an environment variable, will it affect other JDBC interpreters? Is there a workaround?
Links to the repository:
https://github.com/apache/zeppelin/blob/e63ba8e897a522c6cad099286110c2eaa1496912/jdbc/src/main/resources/interpreter-setting.json
https://github.com/apache/zeppelin/blob/8f45fefb1c45ab163bedb94e3d9a9ef8a35afd91/jdbc/src/main/java/org/apache/zeppelin/jdbc/JDBCInterpreter.java
I figured out the problem. The interpreter.json in the config directory stores all the information for each JDBC connection, so updating the password there with a jq command and restarting Zeppelin every day updates the password dynamically.
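As a rough sketch, assuming the interpreter.json layout where each property is an object with a value field (older Zeppelin versions store the value directly as a string) and an interpreter setting named "presto", the daily update could look like this, run from the Zeppelin home directory:

    # read the new password from wherever your auth method provides it (placeholder path)
    NEW_PASSWORD=$(cat /path/to/renewed_password)
    # rewrite default.password for the "presto" interpreter setting in conf/interpreter.json
    jq --arg pw "$NEW_PASSWORD" \
       '(.interpreterSettings[] | select(.name == "presto") | .properties."default.password".value) = $pw' \
       conf/interpreter.json > conf/interpreter.json.tmp \
      && mv conf/interpreter.json.tmp conf/interpreter.json
    # restart Zeppelin so the interpreter picks up the new setting
    bin/zeppelin-daemon.sh restart

Only the named interpreter setting is touched, so other JDBC interpreters keep their own default.password.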
How can I use light-4j to work with MySQL? The demo from GitHub seems incomplete. My project reference link is https://github.com/networknt/light-example-4j
Light-4j doesn't implement anything specific for operating MySQL; you can use the standard Java libraries directly (java.sql, javax.sql). If you need it, you can initialize connection pool settings with a StartupHookProvider and create your own MysqlStartupHookProvider
(review the MongoDB example: https://github.com/networknt/light-example-4j/blob/release/mongodb/src/main/java/com/networknt/database/db/MongoStartupHookProvider.java)
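A minimal sketch of such a hook, assuming HikariCP as the pool implementation (the class name, JDBC URL, credentials and pool size are illustrative, not part of light-4j):

    import com.networknt.server.StartupHookProvider;
    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class MysqlStartupHookProvider implements StartupHookProvider {
        // shared pool that handlers can use to obtain connections
        public static HikariDataSource dataSource;

        @Override
        public void onStartup() {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
            config.setUsername("user");                            // placeholder credentials
            config.setPassword("password");
            config.setMaximumPoolSize(10);
            dataSource = new HikariDataSource(config);
        }
    }

The hook then has to be registered with the server (through its service configuration), the same way the MongoDB example wires up its MongoStartupHookProvider, so light-4j calls onStartup() when the server boots.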
I am new to Mule and I have been struggling with a simple issue for a while now. I am trying to connect to flat files (.MDB, .DBF) located on a remote desktop through my Mule application using the generic database connector of Mule. I have tried different things here:
I am using StelsDBF and StelsMDB drivers for the JDBC connectivity. I tried connecting directly using jdbc URL - jdbc:jstels:mdb:host/path
I have also tried to access it through FTP by running a FileZilla server on the remote desktop and using this JDBC URL in my app - jdbc:jstels:dbf:ftp://user:password@host:21/path
Neither of these seems to be working, as I am always getting connection exceptions. If anyone has tried this before, what is the best way to go about connecting to a remote flat file with Mule? Your response will be greatly appreciated!
If you want to load the contents of the file inside a Mule flow, you should use the File or FTP connector; I don't know for sure about your JDBC option.
With the File connector you can access local files (files on the server where Mule is running), so you could try to mount the folders as a share.
Or run an FTP server like you already tried, that should work.
There is probably an error in your syntax / connection.
Please paste the complete XML of your Mule flow so we can see what you are trying to do.
Your use case is still not really clear to me: are you really planning to use HTTP to trigger the DB every time? Anyway, did you try putting the file on a local path and using that path in your database URL? Here is someone who says he had it working; he created a separate bean.
http://forums.mulesoft.com/questions/6422/setting_property_dynamically_on_jdbcdatasource.html
I think a local path may well be possible, and it's better to test that first.
Also take note of how to refer to a file path; look at the examples for the file connector: https://docs.mulesoft.com/mule-user-guide/v/3.7/file-transport-reference#namespace-and-syntax
If you manage to get it working and you can use the path directly in the JDBC url, you should have a look at the poll scope.
https://docs.mulesoft.com/mule-user-guide/v/3.7/poll-reference
You can use your DB connector as an inbound endpoint when wrapped in a poll scope.
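A rough sketch of that in Mule 3.x XML (the flow name, frequency, database config reference and query are placeholders, and the usual db namespace declaration on the root mule element is assumed):

    <flow name="pollDatabaseFlow">
        <poll frequency="10000">
            <db:select config-ref="Generic_Database_Configuration">
                <db:parameterized-query>SELECT * FROM emplcopy</db:parameterized-query>
            </db:select>
        </poll>
        <logger level="INFO" message="#[payload]" />
    </flow>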
I experienced the same issue when connecting to a Microsoft Access database (*.mdb, *.accdb) using the Mule Database Connector. After further investigation, I solved it by installing the Microsoft Access Database Engine.
Another issue: I couldn't pass a parameter to construct a query the same way I do for other databases, e.g. SELECT * FROM emplcopy WHERE id = #[payload.id]
To solve this issue:
I changed the query type from Parameterized to Dynamic.
I generated the query inside a Set Payload transformer (building the query as a string, e.g. SELECT * FROM emplcopy WHERE id = '1').
Finally, I put it into the Dynamic query area: #[payload], as sketched below.
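A rough sketch of those three steps in Mule 3 XML, reusing the table and column from the example above (the config reference is a placeholder, and the quoting around the id assumes a text column):

    <!-- build the query string from the incoming payload -->
    <set-payload value="#[&quot;SELECT * FROM emplcopy WHERE id = '&quot; + payload.id + &quot;'&quot;]" />
    <!-- hand the prepared string to the dynamic query -->
    <db:select config-ref="Generic_Database_Configuration">
        <db:dynamic-query>#[payload]</db:dynamic-query>
    </db:select>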
I have a little Java program that reads a DB2 table via JDBC. This program is invoked via "tso bpxbatch myjavatool".
I wonder whether there is a way to "pass" the username/password of my TSO user to the JDBC driver?
For example, if I connect to DB2 with a simple REXX script, I don't have to specify my username/password again and DB2/RACF checks whether my user is allowed to execute the SQL statements.
However, my Java tool is not running in my TSO address space but under the control of the J9 JVM in the USS address space...
Is there also a way to automatically log in to DB2 with the current TSO user?
I don't know too much about BPXBATCH, but I assume you are still running under your own user ID in the USS address space.
In your Java code you should be able to get your user ID via
String user = System.getProperty("user.name");
As for the password, you could try using RACF PassTickets instead. There is a library IRRRacf.jar in /usr/include/java_classes and the corresponding Javadoc in IRRRacfDoc.jar in the same directory. The code for generating the PassTicket is rather simple:
IRRPassTicket generator = new IRRPassTicket();
// generate a one-time PassTicket for this user and the DB2 application id (applid)
String ptkt = generator.generate(user, applid);
Then just pass the PassTicket instead of the password and you should be fine.
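As a sketch, the PassTicket then goes wherever the password would normally go when opening the JDBC connection (the type-4 URL below is a placeholder for your own DB2 host, port and location name):

    // placeholder DB2 connection URL
    String url = "jdbc:db2://myhost:446/MYDB2LOC";
    // RACF accepts the PassTicket in place of the password
    Connection con = DriverManager.getConnection(url, user, ptkt);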
Alas, there are several aspects you have to make sure of before using this approach:
Set up RACF to use PassTickets for DB2 - it might already be configured; otherwise you'll have to set up proper profiles in the PTKTDATA class (see the RACF documentation for more details).
Make sure each user running the code has the proper RACF authorization to use the r_ticketserv callable service (again, see the RACF documentation).
Find the correct application name (applid) for your DB2 system. See the DB2 documentation about using PassTickets.