I’m using Doctrine to connect to an Oracle database. At the moment I’m just testing locally.
However, when we move to production it seems there's no way to specify the IDENTITY sequence name so that it's mirrored between production and local. If they're not the same, it will make things quite difficult when testing before sending to production.
Is there any way to specify the Sequence name for the IDENTITY column?
The DATA_DEFAULT column provides the default value of the column, stored as a LONG. If you convert it to a string (for example with dbms_output.put_line), you will get the name of the sequence.
select data_default
from all_tab_columns
where table_name ='<TABLE_NAME>'
and column_name='<COLUMN_NAME>';
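For example, a minimal PL/SQL sketch: fetching the LONG into a VARCHAR2 variable converts it implicitly, so the sequence expression can be printed (run with SERVEROUTPUT ON; the placeholders are the same as above):
declare
  v_default varchar2(4000);
begin
  -- DATA_DEFAULT is a LONG; selecting it into a VARCHAR2 converts it in PL/SQL
  select data_default
    into v_default
    from all_tab_columns
   where table_name  = '<TABLE_NAME>'
     and column_name = '<COLUMN_NAME>';
  -- prints something like "SCHEMA"."ISEQ$$_12345".nextval
  dbms_output.put_line(v_default);
end;
/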
I am working in an environment where we have separate tables for each client (this is something which I can't change due to security and other requirements). For example, if we have clients ACME and MEGAMART then we'd have an ACME_INFO table and MEGAMART_INFO tables and both tables would have the same structure (let's say ID, SOMEVAL1, SOMEVAL2).
I would like to have a way to easily access the different tables dynamically.
To this point I've dealt with this in a few ways including:
Using dynamic SQL in procedures/functions (not fun)
Creating a view which does a UNION ALL on all of the tables and which adds a CLIENT_ID column (i.e. "CREATE VIEW COMBINED_VIEW AS SELECT 'ACME' CLIENT_ID, ID, SOMEVAL1, SOMEVAL2 FROM ACME_INFO UNION ALL SELECT 'MEGAMART' CLIENT_ID, ID, SOMEVAL1, SOMEVAL2 FROM MEGAMART_INFO"), which performs surprisingly well but is a pain to maintain, and kind of defeats some of the requirements which dictate that we have separate tables for each client.
SYNONYMs won't work because we need different connections to act on different clients
A view which refers to a package which has a package variable for the active client. This is just evil and doesn't even work out all that well.
What I'd really like is to be able to create a table function, macro, or something else where I can do something like
SELECT * FROM FN_CLIENT_INFO('ACME');
or even
UPDATE (SELECT * FROM FN_CLIENT_INFO('ACME')) SET SOMEVAL1 = 444 WHERE ID = 3;
I know that I can partially achieve this with a pipelined function, but this mechanism will need to be used by a reporting platform and if the reporting platform does something like
SELECT * FROM FN_CLIENT_INFO('ACME') WHERE SOMEVAL1 = 4
then I want it to run efficiently (assuming SOMEVAL1 has an index for example). This is where a macro would do well.
Macros seem like a good solution, but the above won't work due to protections put in place to guard against SQL injection.
Is there a way to create a macro that somehow verifies that the passed in VARCHAR2 is a valid table name and therefore can be used or is there some other approach to address what I need?
I was thinking that if I had a function which could translate a client name to a DBMS_TF.TABLE_T then I could use a macro, but I haven't found a way to do that well.
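For reference, a table macro that takes the table itself as a DBMS_TF.TABLE_T parameter does work (a sketch, assuming a release with SQL macro support), but the caller must then name the table literally rather than passing a string, which is exactly the dynamic part I'm missing:
create or replace function fn_client_info(p_tab dbms_tf.table_t)
  return varchar2 sql_macro(table) is
begin
  -- the parameter name inside the returned text is replaced with the actual table
  return 'select * from p_tab';
end;
/
-- usage: the table is named directly, not passed as a VARCHAR2
select * from fn_client_info(acme_info);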
A lesser-known method for such cases is to use a system-partitioned table. For instance, consider the following code:
Full example: https://dbfiddle.uk/UQsAgHCk
create table t_common(a int, b int)
partition by system (
partition ACME_INFO,
partition MEGAMART_INFO
);
insert into t_common partition(acme_info)
values(1,1);
insert into t_common partition(megamart_info)
values(2,2);
commit;
select * from t_common partition(acme_info);
select * from t_common partition(megamart_info);
As demonstrated, a common table can be used with different partitions for different clients, allowing each partition to be used like a regular table. We can create a system-partitioned table and use the exchange-partition feature with the older tables. Then we can drop the older tables and create views with the same names, so that older code continues to work against the views while all new code works with the common table by specifying a partition.
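A hedged sketch of that migration path, assuming the legacy ACME_INFO table has exactly the same column structure as t_common (all names here are illustrative):
-- swap the legacy table's rows into the corresponding partition
alter table t_common exchange partition acme_info with table acme_info;
-- the legacy table now holds the (empty) partition contents and can go
drop table acme_info;
-- keep old code working: a view with the legacy name over the partition
create view acme_info as
  select a, b from t_common partition (acme_info);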
We have a legacy application we cannot modify. It connects to Oracle 11g and at some point runs a query and returns a result. The application, however, uses the "generated" column name from Oracle to read the result.
Consider the following query:
select nvl(1,0.0) from DUAL;
As this query does not specify an alias, the generated column name would be "nvl(1,0.0)".
However, on another server the generated column name is "nvl(1,0)" (notice 0, not 0.0), and the application fails.
Is there a configuration that can be changed for Oracle? I've searched for formatting and locale configurations and they are equal on both servers.
Any help would be appreciated
It turns out there's a parameter called cursor_sharing that was set to FORCE instead of EXACT.
select nvl(1,0.0) from DUAL;
The query above returns the following column name depending on the value of the parameter:
FORCE: NVL(1,0)
EXACT: NVL(1,0.0)
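With FORCE, Oracle replaces literals with system-generated bind variables before parsing, which is presumably why the derived column heading loses the original 0.0. If it helps, the setting can be inspected and changed per session (changing it database-wide requires ALTER SYSTEM and the appropriate privileges):
show parameter cursor_sharing   -- SQL*Plus / SQLcl command
alter session set cursor_sharing = EXACT;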
I'm currently porting a Code Igniter based application from MySQL to Oracle (11g) for a specific client. Both the MySQL and Oracle back-ends have to work in conjunction (i.e. we cannot drop the one or the other).
The MySQL DB uses around 100 tables, and ALL identifiers are in lowercase. When I migrate this DB to Oracle using Oracle's SQL Developer tool, I end up with a 'properly' converted DB, but... with all-uppercase identifiers.
Now, for normal usage this isn't really a problem, but the issue arises when using the CI Active Record class. It generates queries to the effect of:
SELECT "somecolumn" FROM "sometable" WHERE "someothercolumn" = somevalue
The issue is that when the " quotes are used for these identifiers, Oracle forces these identifiers to be interpreted in a case sensitive way, which in this case is wreaking havoc.
Patching the core code of CI and/or the application, either to make all queries use case-insensitive identifiers (i.e. by dropping the " quotes around the identifiers) or to convert all identifiers to uppercase on the fly, is IMO not desirable, as it would compromise a potential future framework upgrade. Renaming ALL MySQL identifiers to uppercase is also a very unattractive scenario and has an even bigger impact on the application itself -- not an option for sure.
Instead, what I would like to achieve is to have the migration process (i.e. using SQL Developer) simply respect the case of the source DB and perform the conversion exactly as it does now, except that the identifiers do not get changed to their uppercase versions.
I have searched a fair deal online to find a way to achieve this, and so far it has been to no avail.
Does anyone know if this can be done, and if so: how?
Is the conversion to all uppercase identifiers by any chance a global DB setting, perhaps?
I would have assumed this to be a trivial thing, but I haven't been able to figure it out so far and what little related references that I did come across do not sound very promising...
If you can obtain a schema script created by the database migration, all you need to do is change the identifiers (table names, view names, column names, etc.) to be surrounded with double quotation marks. (I'm not sure whether SQL Developer's migration tool actually has an option to preserve case.)
Without the quote marks, Oracle treats all identifiers as case-insensitive and stores them internally as uppercase strings in the data dictionary. Using quote marks forces Oracle to use the exact case for the schema objects.
e.g.
create table Customers
(
Name varchar2(100),
CreationDate date not null
);
will create CUSTOMERS internally in the data dictionary, and you can write queries like:
select name, creationdate from customers;
alternatively:
create table "Customers"
(
"Name" varchar2(100),
"CreationDate" date not null
);
will create "Customers" internally. You can only write queries using quotes and exact case:
select "Name", "CreationDate" from "Customers";
I have hit this before and simply edited oci8_driver.php (../system/database/oci) as follows:
// The character used for escaping
var $_escape_char = '"';
to
// The character used for escaping
var $_escape_char = '';
Stewart
In my company's product, we retrieve results a page at a time from the database, so all filtering and sorting must be done in the database. One of the problems is coded values: for filtering and sorting to work properly, the code values need to be translated to locale-specific labels in the query.
My plan is to use a table similar to the following:
create table t_code_to_label (
  type   varchar2(10),
  locale varchar2(10),
  code   varchar2(10),
  label  varchar2(50)
);
The first three columns are the primary (unique) key.
When using this table, you would see something like this:
select ent.name, ent.ent_type, entlabel.label as ent_type_label
from t_entitlements ent
join t_code_to_label entlabel on entlabel.type='entlabel' and entlabel.locale=currentLocale() and entlabel.code=ent.ent_type
The problem is that currentLocale() is something that I made up. How can I, on the Java side of a JDBC connection, set the locale for the Connection object I have, in a way that I can read back on the Oracle side in a simple function? Ideally this would be true locale support by Oracle, but I can't find such a thing.
I am using Oracle 10g and 11g.
Are you talking about the NLS_LANGUAGE setting of the Oracle database? Would you (from the client side) like to dictate the use of a particular NLS_LANGUAGE by the Oracle database?
Then maybe this would work: Set Oracle NLS_LANGUAGE from java in a webapp
If you want an "all American" session you could do something like:
ALTER SESSION SET NLS_LANGUAGE= 'AMERICAN' NLS_TERRITORY= 'AMERICA'
NLS_CURRENCY= '$' NLS_ISO_CURRENCY= 'AMERICA'
NLS_NUMERIC_CHARACTERS= '.,' NLS_CALENDAR= 'GREGORIAN'
NLS_DATE_FORMAT= 'DD-MON-RR' NLS_DATE_LANGUAGE= 'AMERICAN'
NLS_SORT= 'BINARY'
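Once the session NLS parameters are set this way, a currentLocale()-style function (the name is from your question) can simply read them back via the built-in SYS_CONTEXT; a minimal sketch:
create or replace function currentLocale return varchar2 is
begin
  -- 'lang' is the ISO language abbreviation (e.g. 'US');
  -- use 'language' instead for the session's full NLS_LANGUAGE value
  return sys_context('userenv', 'lang');
end;
/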
I am trying to perform SET operations in Oracle across remote databases.
I am using the MINUS operator.
My query looks something like this.
SELECT NAME FROM localdb MINUS SELECT NAME FROM remotedb@dblink
This throws an ORA-12704 ("character set mismatch") error. I understand this warrants some kind of conversion or an NLS setting.
What should I try next?
The two name columns are stored in different characters sets. This could be because of their type definitions, or it could be because the two databases are using different character sets.
You might be able to get around this by explicitly converting the field from the remote database to the character set of the local one. Try this:
SELECT NAME FROM localdb MINUS SELECT TO_CHAR(NAME) FROM remotedb@dblink
It seems the types of the NAME columns in those two tables are different.
Make sure the NAME column in the remotedb table has exactly the same type as NAME in the localdb table; matching column types is mandatory when you use a set operator such as MINUS.
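For instance, if the remote column turns out to be NVARCHAR2 while the local one is VARCHAR2 (a common cause of ORA-12704), explicitly converting the remote side to the database character set also works; a sketch using the names from the question:
SELECT NAME FROM localdb
MINUS
-- TRANSLATE ... USING CHAR_CS converts NCHAR/NVARCHAR2 data to the database
-- character set, so both branches of the MINUS agree
SELECT TRANSLATE(NAME USING CHAR_CS) FROM remotedb@dblink;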