H2 SET SCHEMA changes SCHEMA_SEARCH_PATH

If I have a SCHEMA_SEARCH_PATH set and I wish to create a bunch of tables using a common script by setting the schema rather than spelling out the schema in each table create (so the common script can be used in multiple schemas), this also sets the SCHEMA_SEARCH_PATH to just the specified schema.
This seems like an undesirable side effect.

The value set by SET SCHEMA_SEARCH_PATH is not affected by any other commands.
But this value is only used when an identifier is not qualified with a schema and an object with this name doesn't exist in the current schema (which is affected by the SET SCHEMA command).
For example, tables referenced by non-qualified names are searched in the following order:
1. Tables of the current schema.
2. Local temporary tables. (Currently these also include query aliases from WITH clauses, but this may change if a separate scope of identifiers is ever implemented for these views.)
3. Tables of each schema from SCHEMA_SEARCH_PATH, if any. When multiple schemas are specified, their order matters: they are processed in that order.
4. Legacy or compatibility tables, such as DUAL or SYSDUMMY1 in the DB2 and Derby compatibility modes.
The first table matched by its name is used.
This is a complex case; for most types of database objects only steps (1) and (3) are performed.
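For example, a minimal sketch of the lookup rules above (assuming schemas S1 and S2 and a table T2 in S2 already exist):
SET SCHEMA_SEARCH_PATH S1, S2;
SET SCHEMA S1;            -- changes only the current schema; the search path is untouched
CREATE TABLE T1(ID INT);  -- unqualified name: created in the current schema S1
SELECT * FROM T2;         -- not found in S1, so it is resolved through the search path (S1, then S2)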
If you think that something is not working as described here and you can create a standalone test case (Java / JDBC / SQL only, no third-party libraries), you can create a bug report on GitHub:
https://github.com/h2database/h2database/issues

Related

2 databases on one H2 instance

I have a JPA repository with a @Query that joins tables from other DBs located on the same server.
SELECT id, co.name from Agenc a inner join [other_db_name].[schema_name].[table_name] co .....
I want to write integration tests to cover the flow with this query.
In integration tests I use an H2 DB.
My question is: how do I correctly configure the H2 DB to have 2 DBs and make this query work?
Maybe there is a way to create another DB via scripts, or something like that?
H2 supports direct access to only one database at a time, but you can create linked tables to tables from other databases.
To create a linked table, you can use the CREATE LINKED TABLE command:
https://h2database.com/html/commands.html#create_linked_table
CREATE LINKED TABLE targetTableName('', 'jdbcURL', 'username', 'password', 'sourceTableName');
You can also link a whole schema with the LINK_SCHEMA function:
https://h2database.com/html/functions.html#link_schema
CALL LINK_SCHEMA('targetSchemaName', '', 'jdbcURL', 'username', 'password', 'sourceSchemaName');
Note that the format of a fully-qualified table name in H2 (and in the SQL Standard) is catalogName.schemaName.tableName. H2 supports only one catalog (its name is the same as the name of the database) and you can't define additional catalogs. The non-standard syntax with [identifier] is not accepted by H2 unless you use the MSSQLServer compatibility mode. In that mode you can use this syntax, but you still can't have different catalog names, so if they are hard-coded in your application, you have a problem.
Actually, H2 can ignore the specified catalog name if the IGNORE_CATALOGS setting is set to TRUE:
https://h2database.com/html/commands.html#set_ignore_catalogs
SET IGNORE_CATALOGS TRUE;
But if combinations of schema and table name aren't unique in your original configuration, there is nothing H2 can do about it: you can't create different tables with the same schema and table name in H2 in any way.
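For the integration test itself, one possible sketch (assuming the H2 JDBC URL enables the MSSQLServer compatibility mode via MODE=MSSQLServer so the [identifier] syntax is accepted, and reusing the placeholder names from the question) is to create the other database's objects as an ordinary schema and let H2 ignore the catalog part:
SET IGNORE_CATALOGS TRUE;
CREATE SCHEMA IF NOT EXISTS schema_name;
CREATE TABLE schema_name.table_name(id BIGINT PRIMARY KEY, name VARCHAR(255));
-- [other_db_name].[schema_name].[table_name] in the original query now resolves to
-- schema_name.table_name, because the catalog part is ignored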

Oracle user DB export command's scope (User/Schema level)?

I'm a total novice in terms of Oracle DB knowledge. I'm trying to understand the IMPDP command and its scope.
Issue: Suppose there are 500 tables in a particular DB; many of them (60%-70% or more) come out as zero records when we import the data into a fresh Oracle DB (we get the data from a vendor who has the DB). The doubt is: how can most of the tables have zero records in a DB (why were they created in the first place then?). Also, we're assuming that maybe the vendor is using a specific user to generate the .DMP files who has no access to those tables, hence the 0 count. When we asked the vendor, they said that's not how Oracle works; they've provided a user export dump and said, "A schema is a collection of database objects owned by a specific user. Those objects include tables, indexes, views, functions, stored procedures, etc."
When asked about the zero-records issue, they said they're pulling the data correctly and have no understanding as to why so many tables are zero. The SO community has great experts in Oracle DB; can anyone shed some light on:
1) What might be the issue?
2) Is our assumption correct (i.e., that the user doesn't have access to those tables which got zero records)?
3) What's the right way forward?
4) Anything else you want to add.
The vendor is correct - the utility used to generate the export, EXPDP (the complement to IMPDP), can create a full dump of all of the database objects of a specific user. However, the parameters used to generate the export can vary greatly, and it's absolutely possible for an export not to include table data IF the EXPDP command/parameters used to create the export are specified that way. For example, let's imagine that someone wants to export a specific schema using the following command:
expdp [USER]@[DATABASE] schemas=test directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log query=TEST.TABLE:'"WHERE row_date>sysdate"'
While the export is being generated, all of the rows in that specific table will be evaluated based on the WHERE condition. Unless rows have a date in the future, none of the rows dated on or before sysdate will be exported. If a WHERE condition like that is applied to the entire export, you'll have tables with 0 rows in the dump file.
That is just an example - it might also be the case that the tables really have 0 rows. This is possible for a lot of reasons - perhaps it is an older schema with tables that have previously been truncated. Perhaps that particular database isn't used often, and the tables within the schema are empty because rows were never added to the tables. Maybe a developer or another DBA created a bunch of unnecessary tables and they simply were never dropped. It could be a plethora of potential reasons/issues for a schema to have empty tables, and that doesn't mean there is something wrong with the database or the export file being generated. Applications and their technical requirements change all the time, and it's possible that the schema simply wasn't updated when those tables were no longer needed.
The first thing I would recommend is:
Ask the vendor to provide record counts of each table in that schema from their end for validation purposes. This will tell you if the tables are empty in the database. If they are empty in the database, they will be empty in your export. This is very simple and can be achieved with a query like select owner, table_name, num_rows, sample_size, last_analyzed from all_tables where owner=[SCHEMA]; provided that their table statistics are up to date.
If this is a big concern for you, you can always ask them to exclude those tables in the export with a command like:
expdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Or simply exclude them during your import with a command like:
impdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Either way should work, but be careful and ensure that there will be no issues from a constraint/child record perspective. You can also exclude the constraints. There are many ways to work around it.
IF THERE ARE INCONSISTENCIES BETWEEN THE COUNTS AND THE ROWS IMPORTED, I would recommend asking the vendor for the specific EXPDP command or parameter file that was used to generate the export. This will let you know whether the empty tables are being caused by a clause in the export command.
It's impossible to know if your assumption is correct without knowing more about the database the export is coming from or seeing the commands being used to generate the export. I would ask the vendor to verify record counts before assuming that it's a permission issue. Empty tables are created all the time.
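On your side, a rough sketch of the same validation after the import (assuming you can gather statistics on the imported schema; the schema name TEST is just a placeholder) that can be compared against the vendor's counts:
exec dbms_stats.gather_schema_stats('TEST');
select owner, table_name, num_rows, last_analyzed from all_tables where owner = 'TEST' order by table_name;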

Liquibase Oracle: Generate changelog tries to create objects from another schema

I want to generate a changelog XML from an existing Oracle schema, let's name it A. This schema contains references to another schema, schema B. Tables in schema A for example contain foreign keys referencing tables in schema B. User A has only SELECT and REFERENCES privileges on the tables of schema B.
When I try to create a database changelog for schema A, tables and constraints from B are included, even though they are not owned by user A. Is there any way to change this behavior? I tried to set the defaultCatalogName, defaultSchemaName, changelogCatalogName and changelogSchemaName parameters, but nothing changed.
It should work the way you expect, but due to bug https://liquibase.jira.com/browse/CORE-1784 it is not working as expected. It is fixed for the upcoming 3.2.0 release, probably out in mid March.

Do two users access the same database or different?

I installed Oracle on my system, so now orcl is the SID, which is the unique identifier of my database instance.
A starter DB was created as part of the installation. I created 2 users, user1 and user2, using the system account.
Using SQL Developer I am accessing the users; this shows me 2 different connections, each with all the database objects like tables, stored procedures, views, etc.
So:
When using these 2 users, am I accessing the same database? I am issuing all the DDL commands after logging in as user1 or user2; does all this data go into the same .dbf file?
A database instance can be connected to only one database, so does this essentially mean that every time I create a new database I need to make a configuration change so that a database instance points to it?
In my experience with Oracle, the typical unit of division is a schema. Schemas in Oracle are used more like you would use databases in SQL Server or PostgreSQL. They represent both users and a logical separation of objects. Physical separation would usually be done using tablespaces. Tablespaces are a group of physical files where data is stored. Schemas can share or use different tablespaces. Having one tablespace per schema is uncommon; they usually share a few tablespaces or often even just one.
With that in mind, to answer your questions more directly,
1) Like in any other database, you can specify the schema the object belongs to:
CREATE TABLE MY_SCHEMA.TABLE_X ( X NUMBER )
If the schemas on two CREATE statements are different, then they will create different objects. What's different in Oracle is that the default schema changes for every user. The default schema is always the currently connected schema/user. So if you omit the schema like so:
CREATE TABLE TABLE_X ( X NUMBER )
then the implied schema is the currently connected schema/user. So if I'm logged in as MY_SCHEMA, then the above is equivalent to the first example. When connecting as two different users, then the implied schema will be different and the DDL is not equivalent between the two users. So running the same statement would create two different objects if you do not specify a schema.
The two objects may be stored in the same physical file if they are in the same tablespace. (They are most likely in the USERS tablespace if you did not create one explicitly and did not specify a different default tablespace when creating the schemas.) Regardless, they are still two completely separate objects.
If you specify the schema explicitly like in the first example, then the DDL is equivalent regardless of who executes it (although permissions may prevent some users from executing it). So it would result in creating the object once, and attempting to create it a second time would result in an error unless you're using CREATE OR REPLACE or something similar.
2) I don't know the answer to this question, but as I said, in Oracle, the basic unit of separation is usually the schema, not a database. I believe the question you're asking is a large part of the reason why the schemas are used in the way they are. Having multiple actual databases on the same machine/instance is far more difficult in Oracle than in other databases (if not impossible), so it's much simpler to have a single database with many schemas.
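As noted above, objects owned by different schemas may end up in the same tablespace and therefore in the same physical files. A quick way to check (a sketch only, assuming you have access to the ALL_TABLES and DBA_DATA_FILES dictionary views):
select owner, table_name, tablespace_name from all_tables where table_name = 'TABLE_X';
select tablespace_name, file_name from dba_data_files where tablespace_name = 'USERS';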

Why do regular Oracle tables support DML statements but not external tables?

It is known to us that all DML statements are supported on regular Oracle tables, but not on external tables. I tried the below:
SQL> INSERT INTO xtern_empl_rpt VALUES ('70','Rakshit','Nantu','4587966214','natu.rakshit@ge.com','55');
INSERT INTO xtern_empl_rpt VALUES ('70','Rakshit','Nantu','4587966214','natu.rakshit@ge.com','55')
*
ERROR at line 1:
ORA-30657: operation not supported on external organized table
SQL> update xtern_empl_rpt set FIRST_NAME='Arup' where SSN='896743856';
update xtern_empl_rpt set FIRST_NAME='Arup' where SSN='896743856'
*
ERROR at line 1:
ORA-30657: operation not supported on external organized table
SQL>
So it seems external tables do not support this. But my question is: what is the logical reason behind this design?
There is no mechanism in Oracle for locking rows in external tables, and none of the concurrency controls which we get with regular heap tables. So updating is not allowed.
External tables created with the Oracle Loader driver are read only; the Datapump driver allows us to write to external table files, but only in a CTAS (CREATE TABLE AS SELECT) mode.
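For example, a sketch of that one supported write path, a CTAS with the Datapump driver (assuming a directory object data_dir has been created and a source table employees exists; both names are placeholders):
CREATE TABLE empl_unload
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY data_dir
    LOCATION ('empl_unload.dmp')
  )
AS SELECT * FROM employees;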
The problem is that external tables are basically windows on OS files, without the layer of abstraction and control that internal tables offer. Basically, there is no way for the database to lock a record in an OS file, because the notion of a "record" is a database thang, not an OS file thang.
External tables are designed for only one thing: data loading and unloading. They are simply not meant to be used with normal DML, and they're not really meant for normal selects either - that works, but if you need to do a lot of selections on an external table, you're "doing it wrong": load the data into proper tables, calculate statistics & add indexes as necessary.
Having external tables behave like normal tables would require all the transactional machinery to be implemented for them, which is very complex, and not worth it since that's not what they are meant for.
If you need normal tables and want to transplant them from one Oracle database to another, you should evaluate using transportable tablespaces too.
The limitations of external tables are an obvious consequence of their being read-only; they are an adapter for involving in SQL queries either arbitrary record-organized files (ORACLE_LOADER type) or exported copies of tables from another database (ORACLE_DATAPUMP type).
As already mentioned, external tables are only good for full-table-scan queries; if one needs to use indexes in heavy-duty queries or to modify foreign data sets that have been imported from files, regular tables can be populated using the SQL*Loader tool.
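Following the advice above to load the data into proper tables before doing normal DML, a minimal sketch using the external table from the question (the target table name empl_rpt is a placeholder):
CREATE TABLE empl_rpt AS SELECT * FROM xtern_empl_rpt;            -- copy into a regular heap table
UPDATE empl_rpt SET first_name = 'Arup' WHERE ssn = '896743856';  -- DML now works on the copy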

Resources