How to extract a value from %ROWTYPE using a dynamic field name? - oracle

I have a function that receives as input a %ROWTYPE and a variable which contains the name of a field in the ROWTYPE.
For example, my ROWTYPE contains 3 fields:
data_row data%ROWTYPE
data_row.Code
data_row.Description
data_row.Value
And the fieldName variable contains 'Description'.
What is the simplest way to extract the value of data_row.Description?
Regards,
Marco

You can't refer to the record fields dynamically, at least without jumping through a lot of hoops using dynamic SQL and either requerying the underlying table or creating extra objects.
A simple way to do this is just with a case statement:
case upper(fieldName)
  when 'CODE' then
    -- do something with data_row.code
  when 'DESCRIPTION' then
    -- do something with data_row.description
  when 'VALUE' then
    -- do something with data_row.value
  else
    -- possibly indicate an error
end case;
The record field references are now static; only the decision of which one to look at is made at runtime.
db<>fiddle demo
You may need to do some extra work to handle the data types being different; in that demo I'm relying on implicit conversion of everything to strings, which is kind of OK for numbers (or integers anyway) but still not ideal, and certainly not a good idea for dates. But how you handle that will depend on what your function will do with the data.
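As a concrete illustration, here is a minimal sketch of a function built around that CASE statement; the table data with columns code, description and value is taken from the question, while the function name get_field_value and the string return type are assumptions for the example:
-- Minimal sketch; the function name and string return type are illustrative.
CREATE OR REPLACE FUNCTION get_field_value (
  data_row  IN data%ROWTYPE,
  fieldName IN VARCHAR2
) RETURN VARCHAR2
IS
BEGIN
  CASE UPPER(fieldName)
    WHEN 'CODE' THEN
      RETURN TO_CHAR(data_row.code);
    WHEN 'DESCRIPTION' THEN
      RETURN data_row.description;
    WHEN 'VALUE' THEN
      RETURN TO_CHAR(data_row.value);
    ELSE
      RAISE_APPLICATION_ERROR(-20001, 'Unknown field: ' || fieldName);
  END CASE;
END;
/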

Related

What does '%TYPE' mean following a parameter in a procedure?

I am very new to PL/SQL and tried searching for this online to no avail - I would appreciate any help!
I am looking at a procedure that is something along the lines of this:
PROCEDURE pProcedureOne
(pDateOne DATE,
pDateTwo tableA.DateTwo%TYPE,
pDateThree tableB.DateThree%TYPE,
pTypeOne tableC.TypeOne%TYPE,
pTestId tableD.TestIdentifier%TYPE DEFAULT NULL,
pShouldChange BOOLEAN DEFAULT FALSE)
IS
What does the '%TYPE' keyword mean in this context?
tableA.DateTwo%TYPE means "the data type of the DateTwo column in the tableA table". You'll see this referred to as an "anchored type" in documentation.
Using anchored types is quite useful for a couple of reasons:
If the data type of the column changes, the code automatically gets recompiled with the new data type. This eliminates the issue where, say, a varchar2(100) column in a table is modified later to allow varchar2(255) and you have to look through dozens or hundreds of methods that reference that column to make sure that their local variables are updated to be long enough.
It documents what data you expect to be passed in to a procedure or for a local variable to reference. In large systems, you generally have at least a few concepts that have very similar names but that represent slightly different concepts. If you look at a procedure that has a parameter tableA.DateTwo%TYPE, that can be very useful information if there is a different DateTwoPrime column that represents a slightly different date.
It means to use the data type of the table.column you are referencing. So for example, if tableC.TypeOne is VARCHAR2(10), then that is the datatype assigned to pTypeOne.
It means that the data type of, for example, pDateTwo is to be the same as the data type of tableA.DateTwo.
%TYPE means the variable's type does not have to be spelled out explicitly because it is inherited from the referenced column's type.
So pDateTwo doesn't require its own type definition because it will have the same type as tableA.DateTwo.
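To make that concrete, here is a minimal sketch of anchored declarations in an anonymous block; the employees table and its columns are assumptions for illustration:
-- Minimal sketch; table and column names are assumptions.
DECLARE
  v_name  employees.last_name%TYPE;   -- inherits, e.g., VARCHAR2(25) from the column
  v_hired employees.hire_date%TYPE;   -- inherits DATE from the column
BEGIN
  SELECT last_name, hire_date
    INTO v_name, v_hired
    FROM employees
   WHERE ROWNUM = 1;
END;
/
If the column is later widened, the block recompiles against the new type with no code change.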

PostgreSQL: Create index on length of all table fields

I have a table called profile, and I want to order the rows by which ones are the most filled out. Each of the columns is either a JSONB column or a TEXT column. I don't need this to a great degree of accuracy, so typically I've ordered as follows:
SELECT * FROM profile ORDER BY LENGTH(CONCAT(profile.*)) DESC;
However, this is slow, and so I want to create an index. However, this does not work:
CREATE INDEX index_name ON profile (LENGTH(CONCAT(*)));
Nor does
CREATE INDEX index_name ON profile (LENGTH(CONCAT(CAST(* AS TEXT))))
Can't say I'm surprised. What is the right way to declare this index?
To measure the size of the row in text representation you can just cast the whole row to text, which is much faster than concatenating individual columns:
SELECT length(profile::text) FROM profile;
But there are 3 (or 4) issues with this expression in an index:
1. The syntax shorthand profile::text is not accepted in CREATE INDEX; you need to add extra parentheses or fall back to the standard syntax cast(profile AS text).
2. Still the same problem that #jjanes already discussed: only IMMUTABLE functions are allowed in index expressions, and casting a row type to text does not pass this requirement. You could build a fake IMMUTABLE wrapper function, like Jeff outlined.
3. There is an inherent ambiguity (that applies to Jeff's answer as well!): if you have a column name that's the same as the table name (which is a common case), you cannot reference the row type in CREATE INDEX, since the identifier always resolves to the column name first.
4. A minor difference from your original: this adds column separators, row decorators and possibly escape characters to the text representation. That shouldn't matter much for your use case.
However, I would suggest a more radical alternative as a crude indicator for the size of a row: pg_column_size(). It is even shorter and faster and avoids issues 1, 3 and 4:
SELECT pg_column_size(profile) FROM profile;
Issue 2 remains, though: pg_column_size() is also only STABLE. You can create a simple and cheap SQL wrapper function:
CREATE OR REPLACE FUNCTION pg_column_size(profile)
RETURNS int LANGUAGE sql IMMUTABLE AS
'SELECT pg_catalog.pg_column_size($1)';
and then proceed like #jjanes outlined. More details:
Does PostgreSQL support "accent insensitive" collations?
Note that I created the function with the row type profile as parameter. Postgres allows function overloading, which is why we can use the same function name. Now, when we feed the matching row type to pg_column_size() our custom function matches more closely according to function type resolution rules and is picked instead of the polymorphic system function. Alternatively, use a separate name and possibly make the function polymorphic as well ...
Related:
Is there a way to disable function overloading in Postgres
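With the wrapper in place, the index and a matching query might look like the following sketch (the index name is made up; the query must use the same expression so the planner can match the index):
-- Sketch: expression index over the IMMUTABLE wrapper defined above
CREATE INDEX profile_size_idx ON profile (pg_column_size(profile));

-- Use the same expression in queries so the index can be used
SELECT * FROM profile ORDER BY pg_column_size(profile) DESC;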
You can declare a function which is falsely marked "immutable" and build an index on that.
CREATE OR REPLACE FUNCTION len_immut(record)
RETURNS int
LANGUAGE plperl
IMMUTABLE
AS $function$
## This function lies about its immutability.
## Use it with care. It is useful for indexing
## entire table rows.
return length(join ",", values %{$_[0]});
$function$;
and then
create index on profile (len_immut(profile));
SELECT * FROM profile ORDER BY len_immut(profile) DESC;
Since the function is falsely labelled as immutable, the index may become out of date if you do things like add or drop columns on the table, or change the types of columns.
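If that happens, you can force the expression to be re-evaluated for every row by rebuilding the index manually, along these lines (the auto-generated index name is an assumption; check it with \d profile):
-- Sketch: rebuild the lying-IMMUTABLE index after changing the table's shape
ALTER TABLE profile ADD COLUMN extra_field text;
REINDEX INDEX profile_len_immut_idx;  -- assumed auto-generated name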

Oracle get values from :new, :old dynamically by string key

How do I get a value from the special :new or :old records by a "string key"?
e.g. in PHP:
$key = 'bar';
$foo[$key]; // get foo value
How do I do that in Oracle?
:new.bar -- get :new 'bar' value
and
key = 'bar';
:new[key] -- How to?
Is it possible?
Thanks!
It is not possible.
A trigger that fires at row level can access the data in the row that it is processing by using correlation names. The default correlation names are OLD, NEW, and PARENT.
...
OLD, NEW, and PARENT are also called pseudorecords, because they have record structure, but are allowed in fewer contexts than records are. The structure of a pseudorecord is table_name%ROWTYPE, where table_name is the name of the table on which the trigger is created (for OLD and NEW) or the name of the parent table (for PARENT).
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/triggers.htm#autoId4
So, these correlation names are basically records. A record is not a key-value store, so you cannot reference its fields by a string key.
Here's what you can do with them:
http://docs.oracle.com/cd/E11882_01/appdev.112/e10472/composites.htm#CIHFCFCJ
According to this, the first approach is syntactically fine and is to be used like this:
create trigger trg_before_insert before insert on trigger_tbl
for each row
begin
insert into trigger_log (txt) values ('[I] :old.a=' || :old.a || ', :new.a='||:new.a);
end;
/
But if you want to access the field dynamically, one ugly thing I can think of that seems to work (and is not really dynamic in the end at all...) is using a CASE WHEN ... statement for each column you want to be able to use dynamically...
Something along these lines (updating the :new record):
key := 'bar';
value := 'newValue';
CASE key
  WHEN 'bar' THEN :new.bar := value;
  WHEN 'foo' THEN :new.foo := value;
  WHEN 'baz' THEN :new.baz := value;
END CASE;
To read a value from a "dynamic column":
key := 'bar';
value := CASE key
  WHEN 'bar' THEN :new.bar
  WHEN 'foo' THEN :new.foo
  WHEN 'baz' THEN :new.baz
END;
Then use the value variable as required...
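Putting the pieces together, a complete trigger using this pattern might look like the following sketch; the table trigger_tbl and its columns bar, foo and baz (all VARCHAR2 here) are assumptions for illustration:
-- Sketch: choose which :new column to overwrite from a string key.
-- Table and column names/types are assumptions.
CREATE OR REPLACE TRIGGER trg_dynamic_col
BEFORE INSERT ON trigger_tbl
FOR EACH ROW
DECLARE
  key   VARCHAR2(30)  := 'bar';       -- the "dynamic" field name
  value VARCHAR2(100) := 'newValue';  -- the value to write
BEGIN
  CASE key
    WHEN 'bar' THEN :new.bar := value;
    WHEN 'foo' THEN :new.foo := value;
    WHEN 'baz' THEN :new.baz := value;
  END CASE;
END;
/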
Beware however, as #beherenow noted:
what's the datatype of value variable in your reading example?
and how can you be sure you're not going to encounter a type mismatch?
These are questions that require decisions on the implementer's side. For example, with a squint, this thing could be used to dynamically read values from columns that share the same type.
I have to emphasize, though, that I don't see a situation where such a bizarre contraption as the one I proposed should be used, nor do I endorse using it. The reason I kept it here, after #beherenow's complete and definitive answer, is so that everyone finding this page can see that even though there might be a way, it shouldn't be used...
To me, this thing seems:
ugly
brittle
badly scaling
appalling
difficult to maintain
...aaand horribly ugly...
I definitely recommend rethinking the use case you need this for. I myself would angrily shout at anyone writing this kind of code, unless it is absolutely the only way and the whole universe collapses if it is not done this way... (which is very unlikely).
Sorry if I misunderstood your question; it was not totally clear to me.

Single Database Call With Many Parameters vs Many Database Calls With Few Parameters

I am writing a Content Management System which can store meta-data about different document-types. Each document-type has its own set of meta-data fields. For example, a Letter has fields like "To", "From", "ToAddress", "FromAddress" etc., whereas a MinutesOfMeeting has fields like "DateHeldOn", "TimeHeldOn", "AttendedBy" etc.
I am saving this information in the database in two kinds of tables: General and Specific. The General table stores information which is common to all types, such as DocumentOwnerName, DocumentCreatedDate, DocumentSize etc. Specific is not one table but a set of 35 different tables, one for each document-type.
I have a page which contains a grid showing a list of documents. One record corresponds to one document. Since the grid is made to show documents of all types, the first row may show a Letter, the second a MinutesOfMeeting, the third a Memo etc.
I have also made a search feature where the user can set criteria on the basis of which the document list is retrieved. To make it work, there are four search-related parameters for each of the fields in each of the specific tables, and all of these parameters are passed to a central procedure. This procedure then filters out records on the basis of the criteria.
The problem is, dealing with 35 different document-types, each having around 10 fields, I end up with more than a thousand parameters for the procedure. This is a maintenance nightmare. I am looking for a solution.
One solution is to deal with each of the specific tables individually, getting back Ids, then union them. This is fine, except that I have to make 36 different calls to the database, one for each specific table plus one for the general table.
It all boils down to a simple architecture choice: Should I make a single database call passing many parameters or should I make many database calls passing few parameters.
Which approach is more preferable and why?
Edit: The web-server and database-server are on the same machine. Therefore, network speed shouldn't matter.
When designing an API where I need a procedure to take a large number of related parameters, or even a variable list of parameters, I use record types, e.g.:
TYPE param_type IS RECORD (
  -- field names and data types here are illustrative; note that "To" and
  -- "From" are PL/SQL reserved words, so safe names are used instead
  ToName      VARCHAR2(100),
  FromName    VARCHAR2(100),
  ToAddress   VARCHAR2(200),
  FromAddress VARCHAR2(200),
  DateHeldOn  DATE,
  TimeHeldOn  VARCHAR2(10),
  AttendedBy  VARCHAR2(200)
);
PROCEDURE do_search (in_params IN param_type);
The structure of the record is up to you, of course. If the procedure is coded to ignore the record elements that are NULL, then all the caller needs to do is set those elements that are required, e.g.:
DECLARE
  p param_type;
BEGIN
  p.DateHeldOn := DATE '2012-01-01';
  do_search(p);
END;
/
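For completeness, here is a minimal sketch of how do_search itself can ignore NULL elements, using the "parameter IS NULL OR column = parameter" pattern; the documents table, its columns and the abbreviated record are assumptions for illustration:
-- Sketch: NULL record elements mean "not filtered on".
-- The documents table and its columns are assumptions.
CREATE OR REPLACE PACKAGE doc_search AS
  TYPE param_type IS RECORD (
    DateHeldOn DATE,
    AttendedBy VARCHAR2(200)
  );
  PROCEDURE do_search (in_params IN param_type);
END doc_search;
/
CREATE OR REPLACE PACKAGE BODY doc_search AS
  PROCEDURE do_search (in_params IN param_type) IS
  BEGIN
    FOR r IN (
      SELECT d.id
        FROM documents d
       WHERE (in_params.DateHeldOn IS NULL OR d.date_held_on = in_params.DateHeldOn)
         AND (in_params.AttendedBy IS NULL OR d.attended_by = in_params.AttendedBy)
    ) LOOP
      NULL;  -- collect or return the matching ids as required
    END LOOP;
  END do_search;
END doc_search;
/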

iterating over linq entity column

I need to insert a record with LINQ.
I have a NameValueCollection with the data from a form post...
so it started out in the name=value&name2=value2 etc. type format.
The thing is, I need to insert all these values into the table, but of course the table fields are typed, and I need to convert the data to the right types before inserting it.
I could of course explicitly do
linqtableobj.columnproperty = Convert.ToWhatever(value);
but I have many columns in the table, and the data coming back from the form doesn't always contain all fields in the table.
I thought I could iterate over the LINQ object's columns, getting their data types, and use those to convert the appropriate value from the form data.
Fine, all good, but then I'm still stuck with doing
linqtableobj.columnproperty = converted value
...if there is one for every column in the table.
foreach (col in newlinqrowobj)
{
    newlinqobj[col] = Convert.ChangeType(namevaluecollection[col.name], col.datatype)
}
Clearly I can't do that, but is anything like that possible? Or
is it possible to loop around the columns for the new 'record', setting the values as I go... and I guess grabbing the types at that point to do the conversion?
Stumped I am.
Thanks,
Nat
If you have some data type with a hundred different properties, and you want to copy those into a completely different data type with a hundred different properties, then somehow somewhere in your code you are going to have to define a hundred different "mapping" instructions. It doesn't matter what framework you are using, or whether the "mapping" instructions are lines of C# code, XML elements, lambda functions, proprietary "stuff", or whatever. There's no getting away from it.
Bearing that in mind, having one line of code per property looks to me like the fastest, simplest, most readable and maintainable solution.
If I understood your problem correctly, you could use reflection (or dynamic code generation, if it is performance-sensitive) to circumvent your typing problems.
There is a pretty good description of how to do something like this at CodeProject.
Basically you get a PropertyInfo for the property you want to set (if it's not a property, I think you would need dynamic code generation) and use its SetValue method (after calling the appropriate Convert.ChangeType, of course). This basically circumvents the whole static typing, so there you are.
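A minimal sketch of that reflection approach, assuming the form keys match the entity's property names (the FormMapper class and MapFormToEntity method are made up for the example):
// Minimal sketch; assumes form keys match the entity's property names.
using System;
using System.Collections.Specialized;
using System.Reflection;

static class FormMapper
{
    public static void MapFormToEntity(object entity, NameValueCollection form)
    {
        foreach (string key in form.AllKeys)
        {
            PropertyInfo prop = entity.GetType().GetProperty(key,
                BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);
            if (prop == null || !prop.CanWrite)
                continue;  // no matching column property for this form field

            // Unwrap Nullable<T> so Convert.ChangeType sees the underlying type
            Type target = Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType;
            object value = Convert.ChangeType(form[key], target);
            prop.SetValue(entity, value, null);
        }
    }
}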
