Setting up JDBC password dynamically on Apache Zeppelin - jdbc

Is it possible to set default.password dynamically, e.g. from a file? We have successfully connected Presto to Zeppelin with a JDBC connector; however, we are using a different authentication method that requires us to renew the password every day. I have checked the current GitHub repository and found that there is an interpreter-setting.json that takes default.password from the interpreter settings on Zeppelin. If I change default.password to an environment variable, will it affect other JDBC interpreters? Is there a workaround?
Links to the repository:
https://github.com/apache/zeppelin/blob/e63ba8e897a522c6cad099286110c2eaa1496912/jdbc/src/main/resources/interpreter-setting.json
https://github.com/apache/zeppelin/blob/8f45fefb1c45ab163bedb94e3d9a9ef8a35afd91/jdbc/src/main/java/org/apache/zeppelin/jdbc/JDBCInterpreter.java

I figured out the problem. The interpreter.json file in Zeppelin's conf directory stores all the settings for each JDBC connection. So, by updating the password there with a jq command and restarting Zeppelin every day, the password can be refreshed dynamically.
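A minimal sketch of that daily refresh, written here in Python (the same edit can be done with a jq one-liner). The install path, the secret file and the exact JSON layout are assumptions; inspect your own conf/interpreter.json first, since the layout differs between Zeppelin versions.

import json

CONF_PATH = "/opt/zeppelin/conf/interpreter.json"   # hypothetical install path

# Read today's password from wherever your auth system drops it (hypothetical file).
with open("/etc/secrets/presto_password") as f:
    new_password = f.read().strip()

with open(CONF_PATH) as f:
    conf = json.load(f)

# Update every interpreter setting that carries a default.password property;
# narrow this to your Presto interpreter's id if other JDBC setups must keep theirs.
for setting in conf["interpreterSettings"].values():
    props = setting.get("properties", {})
    if "default.password" in props:
        props["default.password"] = new_password

with open(CONF_PATH, "w") as f:
    json.dump(conf, f, indent=2)

# ...then restart Zeppelin, e.g. with bin/zeppelin-daemon.sh restart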

Related

Nifi docker installation

I'm new to NiFi. I was able to install NiFi and see it in the web browser. Now, as a next step, I want to connect to SQL Server; however, it seems I have to install JDBC as well, and here is my issue: the tutorials I look at all reference something called "docker" and advise installing JDBC from there. When I go into cmd and type docker, cmd does not recognize it. Can anyone tell me how to install it and what it is?
There is no need for Docker for this use case.
All you have to do is download and install SQL Server from the official downloads page, if you don't have a server set up already.
Installation guide - https://learn.microsoft.com/en-us/sql/database-engine/install-windows/install-sql-server?view=sql-server-2017
You also need to download the JAR file that contains the JDBC driver - https://learn.microsoft.com/en-us/sql/connect/jdbc/microsoft-jdbc-driver-for-sql-server?view=sql-server-2017
In NiFi, you can use the PutDatabaseRecord processor to insert/update/delete rows in a table. This processor internally uses a DBCPConnectionPool controller service to get database connections.
The DBCPConnectionPool controller service requires the following properties to be set:
Database Connection URL - jdbc:sqlserver://localhost:1433;databaseName=dbname
Database Driver Class Name - com.microsoft.sqlserver.jdbc.SQLServerDriver
Database Driver Location(s) - /tmp/sqlserver.jar (example only)
(Screenshots: PutDatabaseRecord processor and DBCPConnectionPool controller service configuration.)
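If you want to sanity-check the driver JAR and the connection URL before wiring them into NiFi, a few lines of Python with the jaydebeapi package will open the same JDBC connection the controller service will use. This is only a sketch: the host, database name, credentials and JAR path are placeholders.

import jaydebeapi  # pip install jaydebeapi; needs a local JVM

# The four values below mirror the DBCPConnectionPool properties.
conn = jaydebeapi.connect(
    "com.microsoft.sqlserver.jdbc.SQLServerDriver",         # driver class name
    "jdbc:sqlserver://localhost:1433;databaseName=dbname",  # connection URL
    ["myuser", "mypassword"],                               # placeholder credentials
    "/tmp/sqlserver.jar",                                   # driver JAR location
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchall())  # [(1,)] means the driver and URL are fine
conn.close()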
I think you may want to google how to install Docker and what it is; it's already explained in many places.

PyHive ignoring Hive config

I'm intermittently getting the error message
DAG did not succeed due to VERTEX_FAILURE.
when running Hive queries via PyHive. Hive is running on an EMR cluster where hive.vectorized.execution.enabled is set to false in the hive-site.xml file precisely to avoid this error.
I can set the above property through the configuration on the Hive connection, and my query has run successfully every time I've executed it since. However, I want to confirm that this has actually fixed the issue and that hive-site.xml is indeed being ignored.
Can anyone confirm whether this is the expected behavior? Alternatively, is there any way to inspect the Hive configuration via PyHive? I've not been able to find any way of doing this.
Thanks!
PyHive is a thin client that connects to HiveServer2, just like a Java or C client (via JDBC or ODBC). It does not use any Hadoop configuration files on your local machine. The HS2 session starts with whatever properties are set server-side.
Same goes for ImPyla BTW.
So it's your responsibility to set custom session properties from your Python code, e.g. execute this statement...
SET hive.vectorized.execution.enabled=false
... before running your SELECT.
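For example, from PyHive (a minimal sketch; host, port and username are placeholders), you can either pass the property when opening the session or issue SET on the cursor. A bare SET <property> also lets you inspect the session's current value:

from pyhive import hive

# Option 1: set the property when the session is created.
conn = hive.connect(
    host="emr-master.example.com",   # placeholder HiveServer2 host
    port=10000,
    username="hadoop",
    configuration={"hive.vectorized.execution.enabled": "false"},
)
cur = conn.cursor()

# Option 2: set it on an existing session, before the query.
cur.execute("SET hive.vectorized.execution.enabled=false")

# Inspect the effective session value (returns a row like
# 'hive.vectorized.execution.enabled=false' on most Hive versions).
cur.execute("SET hive.vectorized.execution.enabled")
print(cur.fetchall())

cur.execute("SELECT * FROM my_table LIMIT 10")  # placeholder query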

Mule Connect to remote flat files

I am new to Mule and I have been struggling with a simple issue for a while now. I am trying to connect to flat files (.MDB, .DBF) located on a remote desktop through my Mule application using the generic database connector of Mule. I have tried different things here:
I am using the StelsDBF and StelsMDB drivers for JDBC connectivity. I tried connecting directly using the JDBC URL - jdbc:jstels:mdb:host/path
I have also tried to access the files through FTP, by running a FileZilla server on the remote desktop and using this JDBC URL in my app - jdbc:jstels:dbf:ftp://user:password@host:21/path
None of these seem to be working, as I am always getting connection exceptions. If anyone has tried this before, what is the best way to go about connecting to a remote flat file with Mule? Your response will be greatly appreciated!
If you want to load the contents of the file inside a Mule flow, you should use the File or FTP connector; I don't know for sure about your JDBC option.
With the File connector you can access local files (files on the server where Mule is running), so you could try to mount the remote folders as a share.
Or run an FTP server like you already tried; that should work.
There is probably an error in your syntax / connection.
Please paste the complete XML of your Mule flow so we can see what you are trying to do.
Your use case is still not really clear to me: are you really planning to use HTTP to trigger the DB every time? Anyway, did you try putting the file on a local path and using that path in your database URL? Here is someone who says he had it working; he created a separate bean:
http://forums.mulesoft.com/questions/6422/setting_property_dynamically_on_jdbcdatasource.html
I think a local path may well be possible, and it's better to test that first (see the sketch at the end of this answer).
Also take note of how to refer to a file path; look at the examples for the file connector: https://docs.mulesoft.com/mule-user-guide/v/3.7/file-transport-reference#namespace-and-syntax
If you manage to get it working and you can use the path directly in the JDBC url, you should have a look at the poll scope.
https://docs.mulesoft.com/mule-user-guide/v/3.7/poll-reference
You can use your DB connector as an inbound endpoint when wrapped in a poll scope.
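Regarding the local-path suggestion above: before involving Mule at all, you can check whether the jstels driver opens the file from a local path with a few lines of Python and the jaydebeapi package. The driver class name, JAR path and file path are assumptions; verify them against the StelsMDB documentation.

import jaydebeapi  # pip install jaydebeapi; needs a local JVM

conn = jaydebeapi.connect(
    "jstels.jdbc.mdb.MDBDriver2",            # assumed StelsMDB driver class, check the docs
    "jdbc:jstels:mdb:/data/files/mydb.mdb",  # local path instead of a remote one
    jars="/opt/drivers/stelsmdb.jar",        # assumed driver JAR location
)
cur = conn.cursor()
cur.execute("SELECT * FROM sometable")       # placeholder table name
print(cur.fetchmany(5))
conn.close()

If this works, the problem is confined to the remote-access part, not to the driver or the URL syntax.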
I experienced the same issue when connecting to a Microsoft Access database (*.mdb, *.accdb) using the Mule Database Connector. After further investigation, it was solved by installing the Microsoft Access Database Engine.
Another issue: I couldn't pass a parameter to construct a query the same way I do for other databases, e.g.: SELECT * FROM emplcopy WHERE id = #[payload.id]
To solve this issue (a sketch follows the steps):
1. I changed the Query type from Parameterized to Dynamic.
2. I generated the query inside a Set Payload transformer (building the query as a string, e.g.: SELECT * FROM emplcopy WHERE id = '1').
3. Finally, I put it into the Dynamic query area: #[payload]
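Roughly, the flow fragment looks like this (a sketch against the Mule 3 database connector; the config-ref name is a placeholder):

<!-- Build the SQL string first, then hand it to a dynamic query. -->
<set-payload value="#['SELECT * FROM emplcopy WHERE id = \'' + payload.id + '\'']" doc:name="Build query"/>
<db:select config-ref="Access_Database_Config" doc:name="Select">
    <db:dynamic-query>#[payload]</db:dynamic-query>
</db:select>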

How to use DATABASE_URL in Play 2.0 (Scala) for local connection to PostgreSQL 9.1 & Heroku?

I have developed a first web application using a local PostgreSQL 9.1 on OSX (Lion 10.7.4) using Play! Framework 2.0.3. I started with my database connection defined in conf/application.conf (relative to application's directory) with
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://localhost/fotoplay"
db.default.user=foo
db.default.password=bar
(username and password have been changed before posting). This works for local development and testing. I now want to deploy to Heroku. I have set up the Procfile (in the application's directory) with a single line:
web: target/start -Dhttp.port=${PORT} ${JAVA_OPTS} -DapplyEvolutions.default=false -Ddb.default.driver=org.postgresql.Driver -Ddb.default.url=$DATABASE_URL
I have exported DATABASE_URL in .bash_profile so that I can echo $DATABASE_URL at the system command line and get
postgres://foo:bar@localhost/photoplay
At Heroku, I have set up an instance of PostgreSQL. I'm not sure how to write a database evolution to populate Heroku, so I turned off evolutions and populated my database manually. My current evolutions build tables and populate them with some initial data from CSV files, so I would need to place the CSV files somewhere accessible to Heroku. At least for a first pass, I don't want to tackle this issue just yet. But as a result of the manual steps, I have a populated database on Heroku.
With this configuration, my application runs correctly locally. However, I am not using DATABASE_URL, which seems wrong. I pushed this to Heroku and it is not able to connect to the database. The error is:
Caused by: org.postgresql.util.PSQLException:
FATAL: password authentication failed for user "foo"
If I remove the username & password, I break my local configuration. I should be able to use DATABASE_URL 'instead of' the older syntax, but I don't quite know how to do that. I don't think I can figure this out experimentally (wandering high-dimensional spaces is hopeless, and there are several possible configuration settings that might be involved & are subject to typos). Any guidance on how I should set up application.conf would be appreciated.
Worth a try: in your application.conf file, change your db.default.url parameter to include the username & password in the URL, and remove the db.default.user and db.default.password parameters:
db.default.url="postgres://foo:bar@localhost/fotoplay"

Adding a jdbc driver to pentaho design studio and configuring the datasource

I've just integrated Pentaho's Design Studio into the BI server. Does anyone know how to add the MySQL JDBC drivers? I need to connect in order to define the relational action process.
In my research I found:
http://wiki.bizcubed.com.au/xwiki/bin/view/Pentaho%20Tutorial/Install%20Pentaho%20Design%20Studio#Comments
which specifies selecting JDBC Driver, Edit, Extra Class Path from Preferences, but no such preference exists;
http://forums.pentaho.com/showthread.php?85148-Design-Studio-xaction-database-connection-dropdown-list-empty&highlight=add+jdbc+driver+to+design+studio
which resulted in me creating a jdbc folder for the drivers under plugins\org.pentaho.designstudio.editors.actionsequence_4.0.0.stable\lib\,
but, just like the author of that thread, I'm stuck;
http://forums.pentaho.com/showthread.php?53303-Create-a-new-datasource&highlight=add+jdbc+driver+to+design+studio
suggests that:
3. If you are using the Pentaho Design Studio you have to copy your JDBC JAR files to the plugins directory (in the Pentaho plugin) so you can develop, deploy and run your applications. This also applies to the Eclipse plugin (if you are using Eclipse).
This resulted in me placing the JAR files in the plugin directory, to no avail.
http://forums.pentaho.com/showthread.php?53715-Can-t-add-new-datasource-GA-version&highlight=add+jdbc+driver+to+design+studio
talks of a directory, rdw, which does not exist.
Any form of assistance will be greatly appreciated.
You have to configure the datasource by adding a Relational Process Action to your .xaction in Pentaho Design Studio, wherein you can specify the JDBC driver, username, password and database URL. But first you have to put your MySQL JAR file in the lib folder: /path/to/biserver-ce/tomcat/lib
You will also have to save your *.xaction file(s) in the pentaho-solutions folder (/path/to/biserver-ce/pentaho-solutions) in order for them to connect to the database which you have assigned in your Relational Process Action.
I encountered the same problem and solved it as follows:
1. Place mysql-connector-java-5.1.17.jar under the (BI server path)\tomcat\lib\ folder.
2. Start the Pentaho Admin Console (PAC) at http://127.0.0.1:8099 with user: admin, password: password, and add a connection there.
3. Use the name of the connection just created as the JNDI name in the action sequence.
That solved the problem for me.
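For reference, the values you would typically enter for MySQL Connector/J 5.1 when defining the connection (host, port and database name below are placeholders):

Driver class name - com.mysql.jdbc.Driver
Database connection URL - jdbc:mysql://localhost:3306/dbname
Driver (JAR) location - /path/to/biserver-ce/tomcat/lib/mysql-connector-java-5.1.17.jar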
