I am using the driver below to connect to an AS400 system:
"jdbc:as400://system-name/default-schema;properties"
I have a requirement wherein I have to deal with multiple schemas.
As the schema name needs to be mentioned in the JDBC URL, do I need to open a separate connection pool for each schema I am trying to connect to?
Currently I am using two connection pools for two different schemas, pointing to the same DB properties.
Is there any other way to deal with multiple schemas with a single connection?
A schema is more commonly referred to as a library on the IBM i (AS/400).
You can use a single database connection and qualify the table names with schema.table for the default SQL naming convention or schema/table with the system naming convention.
See the "naming" property in the IBM Toolbox for Java JDBC properties section of the Toolbox programmer's guide and SQL and system naming conventions topic in the SQL Programming guide for more information.
By using "system naming" settings your session can take advantage of a "library list" attribute which each job has. It is a list of schemes that is searched when the system is resolving the location of an unqualified object. The concept is similar to the notion of a path in Windows or Linux.
In addition to the links that @JamesA provided, also read the two-part article by Birgitta Hauser, and the SQL Reference on unqualified names.
It is commonly considered best practice to use the session's (i.e. job's) library list rather than statically hard-coding schema names, and I suggest you follow this practice. While the terms schema and library are essentially synonymous, I use the IBM i command CHGCURLIB rather than SET CURRENT SCHEMA, because the command does not restrict the behavior of SQL regarding the library list. My understanding from Birgitta's article is that SET CURRENT SCHEMA blocks the use of the library list entirely, whereas with CHGCURLIB the current library simply becomes the first library in (the user portion of) your library list.
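If you want to follow that advice from JDBC, here is one hedged sketch, assuming an IBM i release where the QSYS2.QCMDEXC SQL procedure is available (older releases have QSYS.QCMDEXC with an extra length parameter) and a hypothetical library LIBA:

```java
import java.sql.CallableStatement;
import java.sql.Connection;

public class CurrentLibraryHelper {
    // Runs CHGCURLIB in the connection's job via the QSYS2.QCMDEXC procedure,
    // so unqualified names keep resolving through the library list.
    // LIBA is a placeholder library name.
    public static void changeCurrentLibrary(Connection con) throws Exception {
        try (CallableStatement cs =
                 con.prepareCall("CALL QSYS2.QCMDEXC('CHGCURLIB CURLIB(LIBA)')")) {
            cs.execute();
        }
    }
}
```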
Is it possible to use the saiku-ui component with a different jolap provider than mondrian, or with a different server backend than the saiku-server component?
I have been looking but I have not found an architecture description of how these pieces fit together and what interfaces they use to communicate. Can anyone point me towards an understanding of what the saiku-ui wants to speak with and what the saiku-server is providing?
The reason for my interest is that I have a set of data spread across hundreds of CSV files that I would like to query with a pivot and charting tool. It looks like the standard way to use this with Saiku would be to have an ETL process to load it into an RDBMS. However, this would not be a simple process, because the files, their content, and the way the files relate to each other vary, so the ETL would have to do a lot of inspection of the data sources to figure it out.
Given this it seems to me that I would have three options in how to use saiku:
1) Write a complex ETL to load into an RDBMS, and then use a standard JDBC driver to provide the data to Mondrian. A side function of the ETL would be to analyze the inputs and write the Mondrian schema file describing the cubes.
2) Write a JDBC driver to access the data natively. This driver would parse SQL and provide access to the underlying tables; essentially it would be a custom read-only DBMS written on top of the CSV files. The JDBC connection would be used by Mondrian to access the data. A side function of this custom DBMS would be to produce the Mondrian schema file.
3) Write a tool that provides a JOLAP interface to the native data (accepting discovery and MDX queries). This would bypass Mondrian entirely and interface with the UI.
I may be a bit naive here, but I consider each of the three options to be feasible. Option #1 is my least preferred because of the likelihood of the data in the RDBMS becoming out of sync with the CSV files. Option #3 is most preferred because the data are simple, so not much aggregating is required, and I suspect that MDX will be easier to parse than SQL.
So, if I could produce my own JOLAP data source, would it be possible to hook the saiku-ui tools up to it? Where would I look to find out the interface configuration details?
Many years ago, @ronaldbouman created xmondrian - a set of tools with an OLAP server and web UI tools for XMLA browsing and visualization. But that project is no longer updated and has no source code available.
I just updated the OLAP server and libraries to the latest versions.
You can get it here and build it:
https://github.com/Muritiku/xmondrian-build
You can use the web package as an example. The Mondrian server works with the saiku-ui.
IMHO,
I would not be as confident as you are, because it took Julian Hyde more than a decade to build Mondrian (MDX->SQL) and Calcite (SQL), which together cover your last two proposals.
You might simply consider using Calcite, or even better Dremio. Dremio has a JDBC interface and can query directories of CSV files in SQL. I tested Saiku over Dremio successfully (with a schema based on two separate RDBMSs). Just be careful to set up the tables' schemas accordingly in the Mondrian v4 schema.
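As a hedged sketch of the Calcite route (the model path and table name are hypothetical; the CSV/file adapter and the model.json that describes it are covered in Calcite's documentation):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CsvOverCalcite {
    public static void main(String[] args) throws Exception {
        // model.json (hypothetical path) tells Calcite's CSV/file adapter which
        // directory of CSV files to expose; each file then appears as a table.
        String url = "jdbc:calcite:model=/path/to/model.json";

        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             // SALES is a hypothetical table backed by sales.csv in that directory.
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM \"SALES\"")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}
```

Mondrian, and therefore Saiku, could then be pointed at that JDBC URL like any other relational data source.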
Best regards,
Fabrice Etanchaud
Dremio
I noticed that at the mapping level we are creating the ODBC connection and at the workflow level we are creating the relational connection. What are those two connections needed for?
The question is crystal clear.
When you create the mapping, you are describing what you want to happen to the data, and you shouldn't be restricted by whether the data model exists yet or not.
In order to do this you need to know the structure of your source and target, but you don't need to actually connect to them. Having a dummy CSV to get you going in the Mapping Designer while the DBA builds the tables in the database is enough.
In the Designer you may connect to an existing structure to create Source or Target definitions. But you only import the structure, the definition - that connection is not tied to the mapping afterwards.
In the Workflow Designer you choose the connection that should process data structured in the way described by the Source or Target definition. That is the connection that will actually be used to access the data.
Let's imagine a standard situation.
Having the current DB schema in a working state, I would like to create a snapshot of this state of the schema objects and name it SNAP_1.
Then, if I updated the schema and ran into problems (bugs or unstable behavior of the new code), it would be good to be able to switch the whole schema's code back to SNAP_1 quickly, in one command.
I'm wondering whether there is any built-in feature of Oracle DBMS for versioning:
PL/SQL code (schema objects)
Data (for example, within configuration tables)
Does Oracle DBMS provide native tools for versioning at least one of these two?
The answer is no. But Oracle 11.2+ has something called "Editions".
This method has many restrictions. For example, data and table structure cannot be versioned.
The cool thing is that separate sessions can use different versions of the DB objects simultaneously (e.g. a package before and after a fix).
Here is Oracle's documentation on EDITION, and some examples of editions.
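For illustration, a minimal hedged sketch (connection details, user and edition names are all hypothetical) of how two JDBC sessions could run different editions of the same PL/SQL at the same time:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class EditionSketch {
    public static void main(String[] args) throws Exception {
        // One-time setup by a privileged user, roughly:
        //   ALTER USER app ENABLE EDITIONS;
        //   CREATE EDITION snap_1;   -- snapshot of the working code
        //   CREATE EDITION snap_2;   -- child edition holding the updated code
        // The patched PL/SQL is compiled while connected to snap_2;
        // the parent edition keeps the old version of the code.
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";

        try (Connection oldSession = DriverManager.getConnection(url, "app", "secret");
             Connection newSession = DriverManager.getConnection(url, "app", "secret")) {

            try (Statement s = oldSession.createStatement()) {
                // This session keeps running the code as it was before the fix.
                s.execute("ALTER SESSION SET EDITION = snap_1");
            }
            try (Statement s = newSession.createStatement()) {
                // This session sees the patched PL/SQL in the newer edition.
                s.execute("ALTER SESSION SET EDITION = snap_2");
            }
            // Both sessions can now call the same package name and execute
            // different versions of its code simultaneously.
        }
    }
}
```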
For security reasons I asked the DB team to add EXTPROC_DLLS:ONLY, but they said this:
"Please be informed that the KEY = EXTPROC1526 doesn’t refer to any
external process at all. This is just a key used by any process needs
to call Oraxxx via IPC protocol. The key can be any value and the same
key value should be passed via the tnsnames.ora"
To me, it seems wrong. Could you please help me on this? What is the exact use of EXTPROC and what happens if we don't add EXTPROC_DLLS:ONLY?
For the Oracle database to call out to external code you need the extproc agent.
PL/SQL, for example, needs extproc to execute external procedures from Oracle.
You can find more information about the security implications here.
I'll paste some of the linked text:
Description
***********
The Oracle database server supports PL/SQL, a programming language. PL/SQL can execute external procedures via extproc. Over the past few years there have been a number of vulnerabilities in this area.
Extproc is intended only to accept requests from the Oracle database server, but local users can still execute commands bypassing this restriction.
Details
*******
No authentication takes place when extproc is asked to load a library and execute a function. This allows local users to run commands as the Oracle user (Oracle on unix and system on Windows). If configured properly, under 10g, extproc runs as nobody on *nix systems so the risk posed here is minimal but still present.
and an example here
Contrary to other databases, Oracle does NOT allow plugins to access its own memory address space. In the case of MySQL/PostgreSQL, a .dll plugin (C stored procedure) is loaded by the main database process.
Oracle instead lets the listener spawn a new process by calling extproc (or extproc32). This process loads the shared library, and the rest of the database talks to this process via IPC.
This approach is safer, because the external library can not crash the database nor corrupt data. On the other hand, C stored procedures can sometimes be slower than Java ones.
This option can restrict the path for .dlls being loaded by extproc, i.e. those created by the CREATE LIBRARY statement.
PS: usage of C stored procedures is VERY rare; if you do not use them you can freely remove the whole extproc stanza from listener.ora.
PS1: there is a possible scenario for exploiting the extproc feature:
The user must have CREATE LIBRARY, which is usually NOT granted.
extproc is not configured to run with nobody's privileges, but runs as oracle:dba.
The user creates a malicious .so library which performs something "evil" during its initialization.
The user puts this library into the /tmp directory.
The user creates an Oracle LIBRARY pointing into /tmp by using the CREATE LIBRARY statement.
The user forces extproc to dlopen this library.
extproc will then execute the evil code with OS privileges oracle:dba.
When using this EXTPROC_DLLS:ONLY restriction, developers have to cooperate with DBAs, and only white-listed libraries can be used and loaded.
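For illustration, a hedged sketch of what the extproc entry in listener.ora might look like with the restriction in place (paths, SID name and library names are hypothetical; verify the exact syntax against the Oracle Net documentation for your release):

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
      (PROGRAM = extproc)
      # Only the libraries listed after ONLY: may be loaded by extproc.
      (ENVS = "EXTPROC_DLLS=ONLY:/u01/app/oracle/extproc/approved1.so:/u01/app/oracle/extproc/approved2.so")
    )
  )
```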
We are creating an XPages application with a MySQL backend. The application will be used by several customers. Each has their own NSF database and a corresponding MySQL database. Each customer will have their own MySQL username. We are using the Extension Library JDBC components (ConnectionManager).
We were planning to store the username and password in a NotesDocument. This way the NSF's design can be easily updated from the NTF template without affecting this data. However, the ConnectionManager component and the @GetJdbcConnection SSJS function both read the username, password and other connection info from a file stored in the WEB-INF/jdbc folder. Files stored there will be overwritten when the NSF design is updated, thus losing the customer-specific information.
There seems to be no way of making these files dynamic (WEB-INF is by specification read-only) or of including dynamic elements inside them (see my previous question).
We could use a dynamic JDBC URL in the ConnectionManager, but the ExtLib book warns against this practice: it seems that we then lose connection pooling. And besides, the @GetJdbcConnection function does not accept JDBC URLs.
So, what is the best way of storing NSF-specific JDBC connection information?
EDIT: SOLVED
I created a subclass of the jdbcConnectionManager component. The procedure is detailed here: http://lazynotesguy.net/blog/2013/08/09/subclassing-an-extlib-component/
The best approach is to subclass the ExtLib classes. Since the ExtLib source is provided, it should not be too hard.
The other option is to use the version control system to maintain branches per customer that differ only in that config file.
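For completeness, a hedged sketch of the underlying idea of reading the credentials from a document in the NSF (the view name, item names, and the availability of the MySQL JDBC driver on the server are all assumptions). Note that going through plain DriverManager like this bypasses the ExtLib connection pooling, which is exactly what subclassing the connection manager avoids:

```java
import java.sql.Connection;
import java.sql.DriverManager;

import com.ibm.xsp.extlib.util.ExtLibUtil;

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.View;

public class NsfJdbcConfig {
    // Reads the customer-specific connection info from a config document in the
    // current NSF and opens a plain JDBC connection. The view name ("JdbcConfig")
    // and the item names are placeholders.
    public static Connection openConnection() throws Exception {
        Database db = ExtLibUtil.getCurrentDatabase();
        View lookup = db.getView("JdbcConfig");
        Document cfg = lookup.getFirstDocument();

        String url = cfg.getItemValueString("JdbcUrl");   // e.g. jdbc:mysql://host/customerdb
        String user = cfg.getItemValueString("JdbcUser");
        String password = cfg.getItemValueString("JdbcPassword");

        return DriverManager.getConnection(url, user, password);
    }
}
```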