Virtual Column Issue with Oracle Forms

I have a form created with Oracle Forms. It has a base table block containing a field (col3) that has the same name as the table column it is derived from.
The form has two other fields, col1 and col2, of numeric type; those fields are also columns of the same table that holds the col3 column mentioned above.
I converted col3 to a virtual column defined as the sum of col1 and col2 in the table's definition ( col3 number generated always as (nvl(col1,0)+nvl(col2,0)) virtual ).
When I try to commit changes in the base table block it raises the error message
ORA-54013: INSERT operation disallowed on virtual columns
I know applying DML to a virtual column makes no sense, but even after changing the col3 field's
Query Only property from No to Yes
Update Allowed property from Yes to No
Insert Allowed property from Yes to No
I still get the same error message. I don't want to write explicit DML statements, since the form has lots of other fields which would all have to be listed in those statements.
Do you have any idea how I can overcome this problem without giving up the base table block structure? (I'm using Fusion Middleware 11g.)

As everything you do with that column is querying, how about setting it to be a non-database column? It would require writing a POST-QUERY trigger, though - otherwise you wouldn't see its value after executing a query:
:block.col3 := nvl(:block.col1, 0) + nvl(:block.col2, 0);
Put the same code into WHEN-VALIDATE-ITEM triggers on both :block.col1 and :block.col2 items, so that you'd calculate the result when inserting/updating values.
Alternatively, create a procedure (within the form), put the above code into it, and then call the procedure from those triggers - in the long term, that's probably a better choice as you'd have to maintain code changes only in the procedure. A sketch is shown below.
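A minimal sketch of the procedure-based approach, assuming the block is named block and using calc_col3 as an illustrative procedure name (neither is taken from the original form):
PROCEDURE calc_col3 IS
BEGIN
  -- mirror the virtual column definition so the displayed value matches the table
  :block.col3 := nvl(:block.col1, 0) + nvl(:block.col2, 0);
END;
-- POST-QUERY trigger on the block, and WHEN-VALIDATE-ITEM triggers on col1 and col2:
calc_col3;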

Related

Does an insert statement without a column list improve performance in Oracle?

I'm using Oracle 12.
When you write an insert statement there is an option not to state the column list. The documentation says:
If you omit the column list altogether, then the values_clause or query must specify values for all columns in the table.
It's also shown on Ask TOM as part of a suggested best-performance solution for bulk inserts:
insert into insert_into_table values ( data(i) );
My question is: does not stating the columns really produce better, or at least equal, performance compared to stating the columns in the statement, as in
insert into A (col1, col2, col3) values (?, ?, ?);
From my experience there is no gain in omitting column names. It's also bad practice to do so: if the column order changes (sometimes people reorder columns for clarity, though they really don't need to) and the column definitions still allow the data to be inserted, you will get the wrong data in the wrong columns.
As a rule of thumb it's not worth the trouble. Always specify the list of columns you're putting values into; the database has to check them anyway.
Related: SQL INSERT performance omitting field names?
Best practice is that you ALWAYS name the columns in insert statements, NOT for performance's sake (there is no difference), but for situations like this:
You create table test1 with columns col1, col2;
You insert data in that table in your procedures/etc, without naming the columns;
You add new columns, col3, col4;
The existing insert logic will now fail; errors will be raised.
So, to avoid the failure, always name the columns; then your code doesn't break when you modify the table structure, as the sketch below illustrates.
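A minimal sketch of that failure scenario, using the illustrative table from the list above:
create table test1 (col1 number, col2 number);
-- works: two values for the two columns
insert into test1 values (1, 2);
alter table test1 add (col3 number, col4 number);
-- now fails with ORA-00947: not enough values
insert into test1 values (1, 2);
-- still works, because the columns are named explicitly
insert into test1 (col1, col2) values (1, 2);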

Why are there always some null values when I use "insert into" in Hive?

I created a table and defined the data type of every field to be the same as in the source table. When I use "insert into table ... select ..." to populate this new table, there are no errors. I am sure the productid field, which is of bigint type, has no null values in the source table. But after the insert, a small number of records have a null productid. I also tried stored as textfile and as parquet; it makes no difference, there are still null values in the resulting table.
However, when I use "create table as select ... from ...", there are no nulls in the resulting productid.
So I don't know where the problem is?
Thanks.
This happens mostly when the actual data you are trying to load has a different data type than the destination Hive table's column data type in the DDL, or when the length is smaller in the target table. Check the productid values that come out missing, comparing the actual data and its length against what is defined in the DDL statement.
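A minimal sketch of one common way this happens, with made-up table names (target, source): Hive matches the select list to the target columns by position, not by name, so a misordered select list puts a string into productid and the failed cast to bigint comes out as NULL.
create table target (productid bigint, productname string);
-- columns are matched by position: productname lands in productid here,
-- and the failed cast to bigint silently becomes NULL
insert into table target
select productname, productid from source;
-- listing the select columns in the target's order avoids the problem
insert into table target
select productid, productname from source;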

Are there any major performance issues if I use virtual columns in Oracle?

Are there any major performance issues if I use virtual columns in an Oracle table?
We have a scenario where the db has fields stored as strings. Since other production apps run off those fields we can't easily convert them.
I am tasked with generating reports from the same db. Since I need to be able to filter by dates (which are stored as strings) it was brought to my attention that we could create a virtual date field so that I can query against that.
Has anyone run into any roadblocks with this approach?
A virtual column is defined using an expression that is evaluated when you select from the table. There is no performance hit on inserts/updates on the table.
For example:
create table t1 (
datestr varchar2(100),
datedt date generated always as (to_date(datestr,'YYYYMMDD'))
);
Table created.
SQL> insert into t1 (datestr) values ('20160815');
1 row created.
SQL> insert into t1 (datestr) values ('xxx');
1 row created.
SQL> commit;
Commit complete.
Note that I was able to insert an invalid date value into datestr. Now we can try to select the data:
SQL> select * from t1 where datedt = date '2016-08-15';
ERROR:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
This could be a problem for you if you can't guarantee all the strings hold valid dates.
As for performance, when you run the above query what you are really running is:
select * from t1 where to_date(datestr,'YYYYMMDD') = date '2016-08-15';
So the query will not be able to use an index on the datestr column (probably), and you may want to add an index on the virtual column. Again, this won't work if any of the strings don't contain valid dates.
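For example, a sketch of indexing the virtual column on the t1 table above (the index name is made up); note that building the index will itself fail with ORA-01841 if any existing datestr value is not a valid date:
create index t1_datedt_i on t1 (datedt);
-- the optimizer can then consider this index for predicates such as
select * from t1 where datedt = date '2016-08-15';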
Another consideration is potential impact on existing code. Hopefully you won't have any code like insert into t1 values (...); i.e. not specifying the column list. If you do you will get the error:
ORA-54013: INSERT operation disallowed on virtual columns
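Continuing the t1 example, a sketch of the difference:
-- fails with ORA-54013: the implicit column list includes the virtual column datedt
insert into t1 values ('20160815', null);
-- works: only the real column is listed
insert into t1 (datestr) values ('20160815');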

tracking changes with trigger - alternate field names

Using the trigger noted below, I am tracking changes to a production table in an audit, or change-log, table. My problem is that the field names in the tracking table are different from the ones in table1. The values are the same, but the names of the columns are different.
The question is then, how must the syntax change in the trigger to take the value of one field name and insert it into a field of a different name in the tracking table?
Thank you for any and all help or suggestions.
CREATE OR REPLACE TRIGGER track_change_trg
AFTER INSERT OR UPDATE OR DELETE
ON table1
FOR EACH ROW
BEGIN
 
IF INSERTING THEN
INSERT INTO tracking table VALUES
(:new.pname, :new.p_id, :new.p_type, :new.t1name,
'INSERTED', SYSDATE);
 
ELSIF UPDATING THEN
INSERT INTO tracking table VALUES
(:new.pname, :new.p_id, :new.p_type, :new.t1name,
'UPDATED', SYSDATE);
 
ELSIF DELETING THEN
INSERT INTO tracking table VALUES
(:old.pname, :old.p_id, :old.p_type, :old.t1name,
'DELETED', SYSDATE);
 
END IF;
END;
/
It makes no difference if the column names are different in the main and audit table. I'm not sure why you think that's a problem - showing any error might have helped clarify your issue, along with the table definitions. The only error I can immediately see being thrown - assuming the space in tracking table is a mistake in transcription - is if the column order doesn't match and you're putting the wrong type or size of data into a column. It's hard to guess what you're seeing, though.
You've omitted the optional column section of the insert statement that would normally list the column names. Without an explicit list of the column names, the values will be assigned based on the order of the columns in the target table, as shown in user_tab_columns.column_id or with describe. It would be better to list the columns to avoid ambiguity, and so you don't have problems if the table definition changes (e.g. a column is added, so you no longer have enough values) or the column order is different in another environment (which arguably shouldn't happen under decent source control). It's easier to spot trivial mistakes too.
Anyway, just list the column names from the table you're inserting into:
INSERT INTO tracking_table (x_name, x_id, x_type, x_t1name, x_action, x_when)
VALUES (:new.pname, :new.p_id, :new.p_type, :new.t1name, 'INSERTED', SYSDATE);
... replacing x_name etc. with the actual column names from tracking_table.

Function-based index when not using index access

I'm using a function-based index with a user-defined function for the first time, and have stumbled across a performance problem when the index can't be used.
Internally, a function-based index seems to generate a hidden table column (of type varchar2(4000), since my function returns a varchar2), and indexes that. That works fine when the index is used, but sometimes we have to do a full table scan using the function as a filter, and in that case I see a performance degradation by a factor of 6. It seems that in this case Oracle does not use the hidden column, but recomputes the function for each row, making the query CPU-bound instead of IO-bound.
Is there a way to make Oracle use that hidden column also for filtering? I wonder if I'm missing some rewrite options or something along those lines.
If not, I'll have to define the column myself and use a trigger to keep it up to date. I'd prefer using the function-based index for transparency and easier maintenance.
Which version of Oracle are you using? If it's 11g you should try using a virtual column. This is a column whose value is derived from an expression or a literal. Virtual columns are defined as part of the table, so they are visible in a table DESC (unlike a function-based index). We can build indexes on virtual columns. And they are maintained automatically, without the need for a trigger.
So you can add a virtual column to your table using the same expression as your function based index. Perhaps like this:
create table t23
(id number
, col_a varchar2(10)
, vcol_a as (upper(substr(col_a, 1, 1)))
)
/
Note that we cannot insert or update a virtual column. So you need to specify the projection of the insert statement:
insert into t23 (id, col_a) values (1, 'this is a test');
Then you can build a regular index on the virtual column:
create index t23_vc_i on t23(vcol_a)
/
Don't forget to drop your function based index!
There is no general way a function-based index can be used in a table scan.
The assumption I made in my question, namely "Internally, a function-based index seems to generate a hidden table column ...", is simply wrong: the results of the function are not stored in a table column, but only in the index.
So, unless there is a way to access the index when performing the scan (the only way I can think of is if it's a combined index starting with the key column(s)), the precomputed function result can't be used.
The 11g "virtual column" feature does not help either, as the column is not stored in the table, but computed on-the-fly, similar to using the function in a view.
In summary: if you can't rule out table scans, and your function call is expensive (slow), use a real column in combination with a "before insert or update" trigger. A function-based index won't do.
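A minimal sketch of that real-column-plus-trigger approach, reusing the illustrative t23 table from the earlier answer and its upper(substr(...)) expression as a stand-in for the expensive user-defined function (the column, trigger and index names are made up):
alter table t23 add (col_a_first varchar2(10));
create or replace trigger t23_col_a_first_trg
before insert or update of col_a on t23
for each row
begin
  -- precompute the expensive expression once, at write time
  :new.col_a_first := upper(substr(:new.col_a, 1, 1));
end;
/
create index t23_col_a_first_i on t23 (col_a_first);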
(Note: Added this answer because I did not want to let this question stand as unanswered. The credit for the answer belongs to thilo, who pointed out that the column is never materialized).
