Oracle get values from :new, :old dynamically by string key - oracle

How can I get a value from the special :new or :old records by a "string key"?
e.g. in PHP:
$key = 'bar';
$foo[$key]; // get foo value
How would I do this in Oracle?
:new.bar --get :new 'bar' value
and
key := 'bar';
:new[key] --How to?
Is it possible?
Thanks!

It is not possible.
A trigger that fires at row level can access the data in the row that
it is processing by using correlation names. The default correlation
names are OLD, NEW, and PARENT.
...
OLD, NEW, and PARENT are also
called pseudorecords, because they have record structure, but are
allowed in fewer contexts than records are. The structure of a
pseudorecord is table_name%ROWTYPE, where table_name is the name of
the table on which the trigger is created (for OLD and NEW) or the
name of the parent table (for PARENT).
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/triggers.htm#autoId4
So these correlation names are basically records. A record is not a key-value store, so you cannot reference its fields by a string key.
Here's what you can do with them:
http://docs.oracle.com/cd/E11882_01/appdev.112/e10472/composites.htm#CIHFCFCJ

According to this, the first approach is syntactically fine; it is to be used like this:
create trigger trg_before_insert before insert on trigger_tbl
for each row
begin
  insert into trigger_log (txt)
  values ('[I] :old.a=' || :old.a || ', :new.a=' || :new.a);
end;
/
But if you want to access the field dynamically, one ugly thing I can think of that seems to work (though it is not really dynamic in the end at all): a CASE statement with a branch for each column you want to be able to use dynamically...
Something along these lines (updating the :new record):
key := 'bar';
value := 'newValue';
CASE key
  WHEN 'bar' THEN :new.bar := value;
  WHEN 'foo' THEN :new.foo := value;
  WHEN 'baz' THEN :new.baz := value;
END CASE;
To read a value from a "dynamic column":
key := 'bar';
value := CASE key
  WHEN 'bar' THEN :new.bar
  WHEN 'foo' THEN :new.foo
  WHEN 'baz' THEN :new.baz
END;
Then use the value variable as required...
Beware however, as #beherenow noted:
what's the datatype of value variable in your reading example?
and how can you be sure you're not going to encounter a type mismatch?
These are questions the implementer has to decide. For example, at a squint, this trick could be used to dynamically read values from columns that share the same data type.
I have to emphasize, though, that I don't see a situation where such a bizarre contraption is called for, nor do I endorse using it. The reason I kept it here, after #beherenow's complete and definitive answer, is so that everyone finding this page can see: though there might be a way, it shouldn't be used...
To me, this thing seems:
ugly
brittle
badly scaling
appalling
difficult to maintain
...aaand horribly ugly...
I definitely recommend rethinking the use case you need this for. I myself would shout angrily at someone writing this kind of code, unless it was absolutely the only way and the whole universe would collapse otherwise... (which is very unlikely).
Sorry if I misunderstood your question; it was not totally clear to me.

Related

How to extract a value from a ROWTYPE using a dynamic field name?

I have a function that receives as input a %ROWTYPE and a variable containing the name of a field in that ROWTYPE.
For example, my ROWTYPE contains 3 fields
data_row as data%ROWTYPE
data_row.Code
data_row.Description
data_row.Value
And the fieldName variable contains 'Description'
What is the simplest way to extract the value of data_row.Description?
Regards,
Marco
You can't refer to the record fields dynamically, at least without jumping through a lot of hoops using dynamic SQL and either requerying the underlying table or creating extra objects.
A simple way to do this is just with a case statement:
case upper(fieldName)
  when 'CODE' then
    -- do something with data_row.code
  when 'DESCRIPTION' then
    -- do something with data_row.description
  when 'VALUE' then
    -- do something with data_row.value
  else
    -- possibly indicate an error
end case;
The record field references are now static, it's just the decision of which to look at that is decided at runtime.
db<>fiddle demo
You may need to do some extra work to handle the data types being different; in that demo I'm relying on implicit conversion of everything to strings, which is kind of OK for numbers (or integers anyway) but still not ideal, and certainly not a good idea for dates. But how you handle that will depend on what your function will do with the data.

What does '%TYPE' mean following a parameter in procedure?

I am very new to PL/SQL and tried searching for this online to no avail - I would appreciate any help!
I am looking at a procedure that is something along the lines of this:
PROCEDURE pProcedureOne
  (pDateOne      DATE,
   pDateTwo      tableA.DateTwo%TYPE,
   pDateThree    tableB.DateThree%TYPE,
   pTypeOne      tableC.TypeOne%TYPE,
   pTestId       tableD.TestIdentifier%TYPE DEFAULT NULL,
   pShouldChange BOOLEAN DEFAULT FALSE)
IS
What does '%TYPE' keyword mean in this context?
tableA.DateTwo%TYPE means "the data type of the DateTwo column in the tableA table". You'll see this referred to as an "anchored type" in documentation.
Using anchored types is quite useful for a couple of reasons:
If the data type of the column changes, the code automatically picks up the new data type when it is recompiled. This eliminates the issue where, say, a varchar2(100) column in a table is later modified to allow varchar2(255) and you have to look through dozens or hundreds of procedures that reference that column to make sure their local variables are updated to be long enough.
It documents what data you expect to be passed in to a procedure or for a local variable to reference. In large systems, you generally have at least a few concepts that have very similar names but that represent slightly different concepts. If you look at a procedure that has a parameter tableA.DateTwo%TYPE, that can be very useful information if there is a different DateTwoPrime column that represents a slightly different date.
It means to use the data type of the table.column you are referencing. So for example, if tableC.TypeOne is VARCHAR2(10), then that is the datatype assigned to pTypeOne.
It means that the data type of, for example, pDateTwo is to be the same as the data type of tableA.DateTwo.
%TYPE means the parameter's type does not have to be spelled out explicitly, because it is inherited from the referenced column's type.
So pDateTwo doesn't require its own type definition because it will have the same type as tableA.DateTwo.

How to avoid inserting the wrong data type in SQLite tables?

SQLite has this "feature" whereby even when you create a column of type INTEGER or REAL, it allows you to insert a string into it - even a string with no numbers in it, like "the quick fox jumped over the lazy dog".
How do you prevent this kind of insertion from happening in your projects?
I mean, when my code has an error that leads to that kind of insert or update, I want the program to raise an error so I can debug it, not silently insert garbage into my database.
You can implement this using the CHECK constraint (see previous answer here). This would look like
CREATE TABLE T (
  N   INTEGER  CHECK(TYPEOF(N) = 'integer'),
  Str TEXT     CHECK(TYPEOF(Str) = 'text'),
  Dt  DATETIME CHECK(JULIANDAY(Dt) IS NOT NULL)
);
The better and safer way is to write functions (isNumeric, isString, etc.) that validate the user input...

Parsing large txt files in ruby taking a lot of time?

Below is code that downloads a txt file (approx. 9000 lines) from the internet and populates the database. I have tried a lot, but it takes more than 7 minutes. I am using Windows 7 64-bit and Ruby 1.9.3. Is there a way to do it faster?
require 'open-uri'
require 'dbi'

dbh = DBI.connect("DBI:Mysql:mfmodel:localhost","root","")
#file = open('http://www.amfiindia.com/spages/NAV0.txt')
file = File.open('test.txt','r')
lines = file.lines
2.times { lines.next }
curSubType = ''
curType = ''
curCompName = ''
lines.each do |line|
  line.strip!
  if line[-1] == ')'
    curType, curSubType = line.split('(')
    curSubType.chop!
  elsif line[-4..-1] == 'Fund'
    curCompName = line.split(" Mutual Fund")[0]
  elsif line == ''
    next
  else
    sCode,isin_div,isin_re,sName,nav,rePrice,salePrice,date = line.split(';')
    sCode = Integer(sCode)
    sth = dbh.prepare "call mfmodel.populate(?,?,?,?,?,?,?)"
    sth.execute curCompName,curSubType,curType,sCode,isin_div,isin_re,sName
  end
end
dbh.do "commit"
dbh.disconnect
file.close
106799;-;-;HDFC ARBITRAGE FUND RETAIL PLAN DIVIDEND OPTION;10.352;10.3;10.352;29-Jun-2012
This is the format of the data to be inserted into the table. Now there are 8000 such lines; how can I combine them all and call the procedure just once? Also, does MySQL support arrays and iteration inside a routine to do such a thing? Please give your suggestions. Thanks.
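For reference, splitting that sample line with the loop's split(';') logic in plain Ruby (no database needed) yields these fields:

```ruby
line = "106799;-;-;HDFC ARBITRAGE FUND RETAIL PLAN DIVIDEND OPTION;10.352;10.3;10.352;29-Jun-2012"
s_code, isin_div, isin_re, s_name, nav, re_price, sale_price, date = line.split(';')
s_code = Integer(s_code)  # => 106799; raises ArgumentError for non-numeric codes
# s_name => "HDFC ARBITRAGE FUND RETAIL PLAN DIVIDEND OPTION"
# date   => "29-Jun-2012"
```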
EDIT
I have to make insertions into the tables depending on whether the rows already exist or not, and I also need to make conditional comparisons before inserting. I can't write plain SQL statements for these, so I wrote SQL stored procedures. Now I have a list #the_data; how do I pass that to the procedure and then iterate through it all on the MySQL side? Any ideas?
insert into mfmodel.company_masters (company_name) values
#{#the_data.map {|str| "('#{str[0]}')"}.join(',')}
this makes 100 insertions, but 35 of them are redundant, so I need to search the table for existing entries before doing an insertion.
Any ideas? Thanks
From your comment, it looks like you are spending all your time executing DB queries. On a recent Ruby project, I also had to optimize some slow code which was importing data from CSV files into the database. I got about a 500x performance increase by importing all the data with a single bulk INSERT query, rather than one query for each row of the CSV file. I accumulated all the data in an array, and then built a single SQL query using string interpolation and Array#join.
From your comments, it seems that you may not know how to build and execute dynamic SQL for a bulk INSERT. First get your data in a nested array, with the fields to be inserted in a known order. Just for an example, imagine we have data like this:
some_data = [['106799', 'HDFC FUND'], ['112933', 'SOME OTHER FUND']]
You seem to be using Rails and MySQL, so the dynamic SQL will have to use MySQL syntax. To build and execute the INSERT, you can do something like:
ActiveRecord::Base.connection.execute(<<SQL)
  INSERT INTO some_table (a_column, another_column)
  VALUES #{some_data.map { |num,str| "(#{num},'#{str}')" }.join(',')};
SQL
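To see just the string-building part in isolation (plain Ruby, no ActiveRecord; the table and column names are only placeholders), the map/join above produces:

```ruby
some_data = [['106799', 'HDFC FUND'], ['112933', 'SOME OTHER FUND']]
values = some_data.map { |num, str| "(#{num},'#{str}')" }.join(',')
sql = "INSERT INTO some_table (a_column, another_column) VALUES #{values};"
# sql => "INSERT INTO some_table (a_column, another_column) VALUES (106799,'HDFC FUND'),(112933,'SOME OTHER FUND');"
```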
You said that you need to insert data into 2 different tables. That's not a problem; just accumulate the data for each table in a different array, and execute 2 dynamic queries, perhaps inside a transaction. 2 queries will be much faster than 9000.
Again, you said in the comments that you may need to update some records rather than inserting. That was also the case in the "CSV import" case which I mentioned above. The solution is only slightly more complicated:
# sometimes code speaks more eloquently than prose
require 'set'

already_imported = Set.new
MyModel.select("unique_column_which_also_appears_in_imported_files").each do |x|
  already_imported << x.unique_column_which_also_appears_in_imported_files
end

to_insert, to_update = [], []
imported_data.each do |row|
  # for the following line, don't let different data types
  # (like String vs. Numeric) get ya
  # if you need to convert the imported data to match correctly against
  # what's already in the DB, do it!
  if already_imported.include? row[index_of_unique_column]
    to_update << row
  else
    to_insert << row
  end
end
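With stand-in data in place of the model query (the IDs and fund names below are made up purely for illustration), the partition logic behaves like this:

```ruby
require 'set'

already_imported = Set.new([106799, 112933])  # stand-in for the DB query above
imported_data = [[106799, 'HDFC FUND'], [200001, 'NEW FUND']]

to_insert, to_update = [], []
imported_data.each do |row|
  # row[0] plays the role of the unique column here
  (already_imported.include?(row[0]) ? to_update : to_insert) << row
end
# to_update => [[106799, "HDFC FUND"]]
# to_insert => [[200001, "NEW FUND"]]
```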
Then you must build a dynamic INSERT and a dynamic UPDATE for each table involved. Google for UPDATE syntax if you need it, and go wild with all your favorite string processing functions!
Going back to the sample code above, note the difference between numeric and string fields. If it is possible that the strings may contain single quotes, you will have to make sure that all the single quotes are escaped. The behavior of String#gsub may surprise you when you try to do this: it assigns a special meaning to \'. The best way I have found so far to escape single quotes is: string.gsub("'") { "\\'" }. Perhaps other posters know a better way.
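A quick demonstration of that gotcha: the replacement-string form silently mangles the data, while the block form inserts a literal backslash-quote.

```ruby
"O'Brien".gsub("'", "\\'")    # => "OBrienBrien"  -- in a replacement string, \' means
                              #    "the text after the match" ($'), not a literal quote
"O'Brien".gsub("'") { "\\'" } # => "O\\'Brien"    -- block return values are taken literally
```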
If you are inserting dates, make sure they are converted to MySQL's date syntax.
Yes, I know that "roll-your-own" SQL sanitization is very iffy. There may even be security bugs with the above approach; if so, I hope my better-informed peers will set me straight. But the performance gains are just too great to ignore. Again, if this can be done using a prepared query with placeholders, and you know how, please post!
Looking at your code, it looks like you are inserting the data using a stored procedure (mfmodel.populate). Even if you do want to use a stored procedure for this, why do you have dbh.prepare in the loop? You should be able to move that line outside of lines.each.
You might want to try exporting the data as csv and loading it with 'load data infile... replace'. It seems cleaner/easier than trying to construct bulk insert queries.

How do I create a case sensitive query in Oracle forms?

I have a block based on a table. If I enter "12345" in enter query mode, it creates a query with
WHERE my_field = '12345'
If I enter "12345A", it goes
WHERE (upper(my_field) = '12345A' AND my_field like '12%')
which is bad, because my_field is indexed normally (not on upper(my_field)). I have tried toggling "Case restriction" attribute between mixed and upper, and "Case insensitive query" between yes and no, nothing seems to help. I also have a block level PRE-QUERY trigger (trigger starts with a RETURN; statement) set on override, so nothing should mess with the formation of the query, yet it still messes up.
Any ideas on what else I could try?
EDIT:
There was an obscure function call within WHEN_NEW_FORM_INSTANCE trigger to some attached library that reset all trigger block's items to CASE_SENSITIVE_QUERY = TRUE. Never would have guessed.
Not sure how the query is getting changed to that form;
WHERE (upper(my_field) = '12345A' AND my_field like '12%')
First check that there are no enter-query or pre-query triggers in the form. Somebody might have attached a trigger at a higher level. Oracle is not smart enough to rewrite the query on its own. Also check that the block is based on a table, not a view or stored procedure, etc.
If all else fails, enable the query triggers in the data block and rewrite the where clause yourself. It is pretty straightforward.
Please include your Oracle Forms version when you post.
The
my_field like '12%'
predicate uses the index. The subset is then filtered with
upper(my_field) = '12345A'
So it might not be as bad as you think....
The most naive question: can you update the column so it's all uppercase? I mean, would that cause some inconvenience to your app?
If you can, it could be handled with a database trigger to ensure it's always uppercase.
If you can't, then I suggest you create another field that you keep updated to uppercase with a database trigger.
You can also create a function-based index on upper(my_field).
