What does '%TYPE' mean following a parameter in a procedure? - Oracle

I am very new to PL/SQL and tried searching for this online to no avail - I would appreciate any help!
I am looking at a procedure that is something along the lines of this:
PROCEDURE pProcedureOne (
  pDateOne      DATE,
  pDateTwo      tableA.DateTwo%TYPE,
  pDateThree    tableB.DateThree%TYPE,
  pTypeOne      tableC.TypeOne%TYPE,
  pTestId       tableD.TestIdentifier%TYPE DEFAULT NULL,
  pShouldChange BOOLEAN DEFAULT FALSE)
IS
What does the '%TYPE' keyword mean in this context?

tableA.DateTwo%TYPE means "the data type of the DateTwo column in the tableA table". You'll see this referred to as an "anchored type" in documentation.
Using anchored types is quite useful for a couple of reasons:
If the data type of the column changes, the code automatically gets recompiled with the new data type. This eliminates the issue where, say, a varchar2(100) column in a table is later widened to varchar2(255) and you have to look through dozens or hundreds of procedures that reference that column to make sure that their local variables are updated to be long enough.
It documents what data you expect to be passed in to a procedure or for a local variable to reference. In large systems, you generally have at least a few concepts that have very similar names but that represent slightly different things. If you look at a procedure that has a parameter of type tableA.DateTwo%TYPE, that can be very useful information if there is a different DateTwoPrime column that represents a slightly different date.
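As a minimal sketch of what this looks like in practice (the table and procedure here are hypothetical stand-ins for the tableA.DateTwo in the question):

CREATE TABLE tableA (
  DateTwo DATE
);

CREATE OR REPLACE PROCEDURE pPrintDate (
  pDateTwo tableA.DateTwo%TYPE)  -- inherits DATE from the column
IS
  vLocal tableA.DateTwo%TYPE;    -- local variable anchored to the same column
BEGIN
  vLocal := pDateTwo;
  DBMS_OUTPUT.PUT_LINE(TO_CHAR(vLocal, 'YYYY-MM-DD'));
END;

If the column's data type is later changed, pPrintDate picks up the new type on recompilation, without any edit to its source.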

It means to use the data type of the table.column you are referencing. So for example, if tableC.TypeOne is VARCHAR2(10), then that is the datatype assigned to pTypeOne.

It means that the data type of, for example, pDateTwo is to be the same as the data type of tableA.DateTwo.

%TYPE means the parameter's type does not have to be spelled out explicitly, because it is inherited from the referenced column's type.
So pDateTwo doesn't require its own type definition; it will have the same type as tableA.DateTwo.

Related

How to extract a value from a %ROWTYPE using a dynamic field name?

I have a function that receives as input a %ROWTYPE and a variable containing the name of a field in that ROWTYPE.
For example, my ROWTYPE contains 3 fields
data_row data%ROWTYPE;
data_row.Code
data_row.Description
data_row.Value
And the fieldName variable contains 'Description'
What is the simplest way to extract the value of data_row.Description?
Regards,
Marco
You can't refer to the record fields dynamically, at least without jumping through a lot of hoops using dynamic SQL and either requerying the underlying table or creating extra objects.
A simple way to do this is just with a case statement:
case upper(fieldName)
  when 'CODE' then
    -- do something with data_row.code
  when 'DESCRIPTION' then
    -- do something with data_row.description
  when 'VALUE' then
    -- do something with data_row.value
  else
    -- possibly indicate an error
end case;
The record field references are now static; only the decision of which one to look at is made at runtime.
db<>fiddle demo
You may need to do some extra work to handle the data types being different; in that demo I'm relying on implicit conversion of everything to strings, which is kind of OK for numbers (or integers anyway) but still not ideal, and certainly not a good idea for dates. But how you handle that will depend on what your function will do with the data.
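To make that concrete, here is a minimal sketch of the CASE approach wrapped in a function. It assumes the underlying table is named data with a string CODE, a string DESCRIPTION and a numeric VALUE column; adjust the names and the return handling to your actual schema:

CREATE OR REPLACE FUNCTION get_field (
  data_row  data%ROWTYPE,
  fieldName VARCHAR2)
RETURN VARCHAR2
IS
BEGIN
  CASE UPPER(fieldName)
    WHEN 'CODE'        THEN RETURN data_row.code;
    WHEN 'DESCRIPTION' THEN RETURN data_row.description;
    WHEN 'VALUE'       THEN RETURN TO_CHAR(data_row.value);  -- explicit conversion for the non-string field
    ELSE RAISE_APPLICATION_ERROR(-20001, 'Unknown field: ' || fieldName);
  END CASE;
END;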

Navigating through REFs in Oracle

I'm working with Oracle 11gR2, and I'm having doubts about objects which contain REFs. Consider the following picture:
Knowing that the table emp_ps is a table of emp_typ, I can't understand how the statement in the picture is correct. Shouldn't the REF fields be unreachable through the "." operator? I thought that I had to DEREF the value of "e.dept" into an aux variable of emp_typ, and only then would I be able to navigate through the fields of aux!
I think you only need DEREF to return the object as a whole.
The documentation allows implicit dot-notation dereferencing, even multi-level nested.
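A small sketch of why the dot notation works (the type and table definitions below are guesses at the schema shown in the picture):

CREATE TYPE dept_typ AS OBJECT (dname VARCHAR2(30));
/
CREATE TYPE emp_typ AS OBJECT (ename VARCHAR2(30), dept REF dept_typ);
/
CREATE TABLE dept_ps OF dept_typ;
CREATE TABLE emp_ps OF emp_typ;

-- Oracle implicitly dereferences e.dept here; no explicit DEREF is needed:
SELECT e.ename, e.dept.dname FROM emp_ps e;

-- DEREF is for retrieving the referenced object as a whole:
SELECT DEREF(e.dept) FROM emp_ps e;

Note that implicit dereferencing requires a table alias (e here); navigating attributes without one raises an error.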

How to implement dynamic datasets in an SSRS report

I have the following scenario: a single .rdl file with a stored procedure as its datasource. This stored procedure accepts two parameters: @ProcedureName nvarchar(max) and @Parameters xml. The stored procedure's job is to call another stored procedure (most probably on a different database) with the given XML parameters. So, in essence, each of the stored procs that gets executed will return its own dataset.
How would I go about creating a tablix/matrix that consumes the dataset without specifying the columns as the columns need to get generated at runtime?
Unfortunately, SSRS doesn't have "AutoGenerateColumns"-style functionality and resolves a number of things at design time. So the short answer is that you cannot.
The designer checks field references when saving, and will not save with a reference to a field that isn't in a dataset's field list. If a field ceases to exist after the report definition is generated, it will show up as a static blank value on the report. Expressions referencing the missing field go blank as well, even if the field is in a branch that is never evaluated. So if field B is removed, this expression would still be affected:
=IIF(1=1,Fields!A.Value,Fields!B.Value)
Which means that you can't use conditional grouping expressions as a workaround, even if you had an exhaustive list of the columns that might be returned.

PostgreSQL: Create index on length of all table fields

I have a table called profile, and I want to order the rows by which ones are the most filled out. Each of the columns is either a JSONB column or a TEXT column. I don't need this to be exact, so typically I've ordered as follows:
SELECT * FROM profile ORDER BY LENGTH(CONCAT(profile.*)) DESC;
This is slow, though, so I want to create an index. However, this does not work:
CREATE INDEX index_name ON profile (LENGTH(CONCAT(*)))
Nor does
CREATE INDEX index_name ON profile (LENGTH(CONCAT(CAST(* AS TEXT))))
Can't say I'm surprised. What is the right way to declare this index?
To measure the size of the row in text representation you can just cast the whole row to text, which is much faster than concatenating individual columns:
SELECT length(profile::text) FROM profile;
But there are 3 (or 4) issues with this expression in an index:
1. The syntax shorthand profile::text is not accepted in CREATE INDEX; you need to add extra parentheses or fall back to the standard syntax cast(profile AS text).
2. Still the same problem that @jjanes already discussed: only IMMUTABLE functions are allowed in index expressions, and casting a row type to text does not pass this requirement. You could build a fake IMMUTABLE wrapper function, like Jeff outlined.
3. There is an inherent ambiguity (which applies to Jeff's answer as well!): if you have a column with the same name as the table (a common case), you cannot reference the row type in CREATE INDEX, since the identifier always resolves to the column name first.
4. A minor difference from your original: this adds column separators, row decorators and possibly escape characters to the text representation. That shouldn't matter much for your use case.
However, I would suggest a more radical alternative as a crude indicator of row size: pg_column_size(). It is even shorter and faster and avoids issues 1, 3 and 4:
SELECT pg_column_size(profile) FROM profile;
Issue 2 remains, though: pg_column_size() is also only STABLE. You can create a simple and cheap SQL wrapper function:
CREATE OR REPLACE FUNCTION pg_column_size(profile)
RETURNS int LANGUAGE sql IMMUTABLE AS
'SELECT pg_catalog.pg_column_size($1)';
and then proceed like @jjanes outlined. More details:
Does PostgreSQL support "accent insensitive" collations?
Note that I created the function with the row type profile as parameter. Postgres allows function overloading, which is why we can use the same function name. Now, when we feed the matching row type to pg_column_size() our custom function matches more closely according to function type resolution rules and is picked instead of the polymorphic system function. Alternatively, use a separate name and possibly make the function polymorphic as well ...
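Putting it together, with the IMMUTABLE wrapper above in place the index and query would look something like this (index name illustrative; untested sketch):

CREATE INDEX profile_size_idx ON profile (pg_column_size(profile));

-- the same expression in the query lets the planner consider the index
SELECT * FROM profile ORDER BY pg_column_size(profile) DESC;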
Related:
Is there a way to disable function overloading in Postgres
You can declare a function which is falsely marked "immutable" and build an index on that.
CREATE OR REPLACE FUNCTION len_immut(record)
RETURNS int
LANGUAGE plperl
IMMUTABLE
AS $function$
## This function lies about its immutability.
## Use it with care. It is useful for indexing
## entire table rows.
return length(join ",", values %{$_[0]});
$function$;
and then
create index on profile (len_immut(profile));
SELECT * FROM profile ORDER BY len_immut(profile) DESC;
Since the function is falsely labelled as immutable, the index may become out of date if you do things like add or drop columns on the table, or change the types of columns.
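If the table's structure does change, rebuild the index so its stored values are recomputed, e.g. (the index name here is whatever your CREATE INDEX generated or specified):

REINDEX INDEX profile_len_immut_idx;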

Single Database Call With Many Parameters vs Many Database Calls With Few Parameters

I am writing a Content Management System which can store meta-data about different document-types. Each document-type has its own set of meta-data fields. For example a Letter has fields like "To", "From", "ToAddress", "FromAddress" etc whereas a MinutesOfMeeting has fields like "DateHeldOn", "TimeHeldOn", "AttendedBy" etc.
I am saving this information in the database in two kinds of tables: General and Specific. The General table stores information common to all types, such as DocumentOwnerName, DocumentCreatedDate, DocumentSize etc. The Specific "table" is not one table but a set of 35 different tables, one for each document-type.
I have a page which contains a grid showing a list of documents. One record corresponds to one document. Since the grid shows documents of all types, the first row may show a Letter, the second a MinutesOfMeeting, the third a Memo etc.
I have also made a search feature where the user can set criteria on the basis of which the document list is retrieved. To make it work, there are four search-related parameters for each of the fields in each of the specific tables, and all of these parameters are passed to a central procedure. This procedure then filters out records on the basis of the criteria.
The problem is, dealing with 35 different document-types, each having around 10 fields, I end up with more than a thousand parameters for the procedure. This is a maintenance nightmare. I am looking for a solution.
One solution is to deal with each of the specific tables individually, getting back Ids, then union them. This is fine, except that I have to make 36 different calls to the database: one for each specific table, plus one for the general table.
It all boils down to a simple architecture choice: Should I make a single database call passing many parameters or should I make many database calls passing few parameters.
Which approach is more preferable and why?
Edit: The web-server and database-server are on the same machine. Therefore, network speed shouldn't matter.
When designing an API where I need a procedure to take a large number of related parameters, or even a variable list of parameters, I use record types, e.g.:
TYPE param_type IS RECORD (
  ToName      VARCHAR2(100),  -- example types; "To"/"From" are reserved words, hence the suffix
  FromName    VARCHAR2(100),
  ToAddress   VARCHAR2(200),
  FromAddress VARCHAR2(200),
  DateHeldOn  DATE,
  TimeHeldOn  VARCHAR2(8),
  AttendedBy  VARCHAR2(200)
);
PROCEDURE do_search (in_params IN param_type);
The structure of the record is up to you, of course. If the procedure is coded to ignore the record elements that are NULL, then all the caller needs to do is set those elements that are required, e.g.:
DECLARE
p param_type;
BEGIN
p.DateHeldOn := DATE '2012-01-01';
do_search(p);
END;
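The "ignore NULL elements" part of do_search usually comes down to a WHERE clause of this shape (table and column names here are purely illustrative):

SELECT d.document_id
FROM documents d
WHERE (in_params.DateHeldOn IS NULL OR d.date_held_on = in_params.DateHeldOn)
  AND (in_params.AttendedBy IS NULL OR d.attended_by = in_params.AttendedBy);

Each criterion only filters when the caller actually set the corresponding record element.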
