Fast infix search and count on a 40-million-row table in PostgreSQL - performance

I'm new to database administration, but I need to create a database view, and the DB admin requires it to run in 5 minutes or less. My database is PostgreSQL 9.1.1 on Red Hat 4.4 Linux 64-bit; I'm unsure about the hardware specifications. One of the tables has 40 million rows. From that table, I have a column of directory paths which I must group by about 20 string patterns and count their occurrences. The patterns require infix search, since a pattern can appear in the middle or at the end of a path. The patterns also have a priority, as in when %str1% then 'str1', when %str2% then 'str2', and str1, str2, str3, etc. can all occur in the same path, i.e.
path
/usr/myblock/str1/str2
/usr/myblock/something/str2
/usr/myblock/str1/something/str3
What I did so far was to build a table out of CASE statements and join it back to the original table with LIKE, then SELECT id, pattern, count(pattern). The query runtime was terrible, taking 5 minutes to retrieve results from just 5.5K rows. My query looks like this:
WITH a AS (
SELECT CASE
WHEN path ~ '^/usr/myblock/(.*)str1(.*)' THEN 'str1'
WHEN path ~ '^/usr/myblock/(.*)str2$' THEN 'str2'
WHEN path ~ '^/usr/myblock/(.*)str3$' THEN 'str3'
.... --multiple other case conditions
WHEN path ~ '^/usr/myblock/' THEN 'others'
ELSE 'n/a'
END as flow
FROM mega_t WHERE left(path,13)='/usr/myblock/' limit 5)
SELECT id, a.flow, count(*) AS flow_count FROM a
JOIN mega_t ON path LIKE '%' || a.flow || '%'
WHERE (some_conditions) AND to_timestamp(test_runs.created_at::double precision)
> ('now'::text::date - '1 mon'::interval) --collect last 1 month's results only
GROUP BY id, a.flow;
My expected output for that simple case would be:
id | flow | flow_count
1 | str1 | 2
2 | str2 | 1
What is a better way to search for substrings like this and count occurrences? I can't use ts_stat, nor 'SELECT count(path) WHERE path LIKE %str1%', because of the if-else priority it needs. I read about creating trigram indexes, but I think that is overkill for my patterns. I hope this question is clear and useful. Another thing I should add is that the 40-million-row table is updated frequently (every few seconds or minutes), while the view will only be accessed every eight hours.
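(For illustration only, not from the original post: the prioritized classification could in principle be collapsed into a single pass with one CASE expression and a GROUP BY, avoiding the join back on LIKE. Table, column, and placeholder pattern names below are taken from the question.)
-- Single-pass sketch: classify each path once with a prioritized CASE, then aggregate.
SELECT CASE
         WHEN path ~ 'str1' THEN 'str1'
         WHEN path ~ 'str2' THEN 'str2'
         WHEN path ~ 'str3' THEN 'str3'
         -- ... remaining patterns, in priority order ...
         ELSE 'others'
       END      AS flow,
       count(*) AS flow_count
FROM mega_t
WHERE left(path, 13) = '/usr/myblock/'
GROUP BY 1;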

Related

Bounding the Oracle Text "near" operator to the same sentence

I have a column that stores paragraphs with multiple sentences, and I am using the "near" operator to look for the right record. However, is it possible to bound the near operator so that it only looks for words within the same sentence?
For example:
Paragraph
"An elderly man has died as a result of coronavirus in the Royal
Hobart Hospital overnight. It follows the death of a woman in her 80s
in the North West Regional Hospital in Burnie on Monday morning, and
brings the national toll to 19. "
indextype is ctxsys.context
select score(1)
from tbl
where contains(Paragraph, 'NEAR((coronavirus, death), 20, FALSE)', 1) > 0
The result I want is nothing, as the two words are in different sentences. However, right now it returns a positive number because both words are less than 20 words apart.
Can you share some ideas on how to do this?
Thanks in advance!
The query should look like this:
select score(1)
from tbl
where contains(Paragraph, 'NEAR((coronavirus, death), 20, FALSE)
WITHIN SENTENCE', 1) > 0
;
That is - use the WITHIN operator.
Note that you must tell the index to recognize sentences first. That is: if you created the index with a statement like this:
create index ctxidx on tbl(Paragraph)
indextype is ctxsys.context
-- parameters(' ... ')
;
where the parameters (if you used that clause) don't say anything about "sentences", you will get an error if you try the query above - something along the lines of
DRG-10837: section sentence does not exist
First you will have to define "special" sections for sentences:
begin
ctx_ddl.create_section_group('my_section_group', 'AUTO_SECTION_GROUP');
ctx_ddl.add_special_section('my_section_group', 'SENTENCE');
end;
/
With this in hand:
drop index ctxidx;
create index ctxidx on tbl(Paragraph)
indextype is ctxsys.context
parameters ('section group my_section_group')
;
Now you are ready to successfully run the query at the top of this Answer.

Oracle fuzzy text search with wildcards

I've got a SAP Oracle database full with customer data.
In our custom CRM it is quite common to search for customers using wildcards. In addition to the SAP standard search, we would like to do some fuzzy text searching for names which are similar to the entered name.
Currently we're using the UTL_MATCH.EDIT_DISTANCE function to search for similar names. The only disadvantage is that it is not possible to use some wildcard patterns.
Is there any possibility to use wildcards in combination with the UTL_MATCH.EDIT_DISTANCE function, or are there different (or even better) approaches to do that?
Let's say, there are the following names in the database:
PATRICK NOR
ORVILLE ALEX
OWEN TRISTAN
OKEN TRIST
The query could look like OKEN*IST*, and both OWEN TRISTAN and OKEN TRIST should be returned. OKEN would be a 100% match and OWEN less.
My current test-query looks like:
SELECT gp.partner, gp.bu_sort1, UTL_MATCH.edit_distance(gp.bu_sort1, ?) as edit_distance
FROM but000 gp
WHERE UTL_MATCH.edit_distance(gp.bu_sort1, ?) < 4
This query works fine except if wildcards * are used within the search string (which is quite common).
Beware of the implications of your approach in terms of performance. Even if it "functionally" worked, with UTL_MATCH you can only filter the results obtained by a full table scan.
What you likely need is an index on such data.
Head to Oracle Text, the text-indexing capability of Oracle. Bear in mind that it requires some effort to put to work.
You might juggle with the fuzzy operator, but handle it with care. Most Oracle Text features are language dependent (they take into account the English dictionary, German, etc.).
For instance
-- create and populate the table
create table xxx_names (name varchar2(100));
insert into xxx_names(name) values('PATRICK NOR');
insert into xxx_names(name) values('ORVILLE ALEX');
insert into xxx_names(name) values('OWEN TRISTAN');
insert into xxx_names(name) values('OKEN TRIST');
insert into xxx_names(name) values('OKENOR SAD');
insert into xxx_names(name) values('OKENEAR TRUST');
--create the domain index
create index xxx_names_ctx on xxx_names(name) indextype is ctxsys.context;
This query would return results that you'd probably like (input is the string "TRST")
select
SCORE(1), name
from
xxx_names n
where
CONTAINS(n.name, 'definescore(fuzzy(TRST, 1, 6, weight),relevance)', 1) > 0
;
SCORE(1) NAME
---------- --------------------
1 OWEN TRISTAN
22 OKEN TRIST
But with the input string "IST" it would likely return nothing (in my case this is what it does).
Also note that in general, inputs of less than 3 characters are considered non-matching by default.
You'll possibly get a more "predictable" outcome if you drop the "fuzzy" requirement and stick to finding rows that simply "contain" the exact sequence you passed in.
In this case, try using a CTXCAT index, which, by the way, supports some wildcards (warning: it supports multiple columns, but a column cannot exceed 30 characters in size!).
-- create and populate the table
--max length is 30 chars, otherwise the catsearch index can't be created
create table xxx_names (name varchar2(30));
insert into xxx_names(name) values('PATRICK NOR');
insert into xxx_names(name) values('ORVILLE ALEX');
insert into xxx_names(name) values('OWEN TRISTAN');
insert into xxx_names(name) values('OKEN TRIST');
insert into xxx_names(name) values('OKENOR SAD');
insert into xxx_names(name) values('OKENEAR TRUST');
begin
ctx_ddl.create_index_set('xxx_names_set');
ctx_ddl.add_index('xxx_names_set', 'name');
end;
/
drop index xxx_names_cat;
CREATE INDEX xxx_names_cat ON xxx_names(name) INDEXTYPE IS CTXSYS.CTXCAT
PARAMETERS ('index set xxx_names_set');
The latter, with this query, would work nicely (input is "*TRIST*"):
select
UTL_MATCH.edit_distance(name, 'TRIST') dist,
name
from
xxx_names
where
catsearch(name, '*TRIST*', 'order by name desc') > 0
;
DIST NAME
---------- --------------------
7 OWEN TRISTAN
5 OKEN TRIST
But with the input "*O*TRIST*" it wouldn't return anything (for some reason).
Bottom line: text indexes are probably the only way to go (for performance) but you have to fiddle quite a bit to understand all the intricacies.
References:
fuzzy search: Oracle Text CONTAINS Query Operators
catsearch : Oracle Text SQL Statements and Operators
Assuming "wildcard" means an asterisk, you want a name that matches all specified letters to rank highest, with more specified letters matching better than less, otherwise rank by edit distance similarity.
using the placeholder ? for your search term, try this:
select *
from mytable
order by case
when name like '%' || replace(?, '*', '%') || '%' then 0 - length(replace(?, '*', ''))
else 100 - UTL_MATCH.edit_distance_similarity(?, name) end
fetch first 10 rows only
FYI: all "like" matches have a negative ordering number whose magnitude is the number of letters specified, while all "like" misses have a non-negative ordering number whose magnitude is the percentage difference. In all cases, a lower number is a better match.
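For example, a sketch with the ? placeholder replaced by the literal search term 'OKEN*IST*' (table and column names as in the query above; the ordering values follow from the rule just described):
select *
from mytable
order by case
           -- 'OKEN*IST*' specifies 7 letters, so a LIKE match orders as -7
           when name like '%' || replace('OKEN*IST*', '*', '%') || '%'
             then 0 - length(replace('OKEN*IST*', '*', ''))
           -- a miss orders as 100 minus the edit-distance similarity (0-100)
           else 100 - UTL_MATCH.edit_distance_similarity('OKEN*IST*', name)
         end
fetch first 10 rows only;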

Oracle SQL: Select rows from table A with fallback to joined table A and B. (union, group by,...)

The requirement may seem a bit odd, but bear with me: let's say I have a list of my employees like this:
pid name
-------------------------
1 Smith-Gordon
2 Hansen
3 Simpson
And a table of previous names (if e.g. Mrs Smith-Gordon and Mr Hansen had one or more different names before they were married, respectively), employeehist:
pid oldname
-------------------------
1 Smith
2 Taylor
2 Baker
What I want now is to be able to search for names and get results from both tables like this:
a) Search for "Simpson%" -> Get a result like "3, Simpson"
b) Search for "Hansen%" -> Get a result like "2, Hansen"
c) Search for "Taylor%" -> Get a result like "2, Hansen, matched on previous Taylor"
d) Search for "Smith%" -> Get a result like "1, Smith-Gordon"
In other words, I want the current record, plus the old name if that was where the pertinent match occurred.
What I tried so far:
1) Naively join the history to the current employees: The searches b), c) and d) will always contain something in the oldname column, so I can't tell where the match occurred. I also get duplicate hits for Mr Hansen.
2) I tried to UNION a first select on employees (containing a dummy NULL AS oldname) with a second select joining employeehist to employees, which returns a nice hit for search b) without an oldname and one with an oldname for c), but now I predictably get duplicates in d).
Any thoughts?
You can use the following query with a parameter:
SELECT e.pid,
       CASE
         WHEN e.name LIKE :search_key THEN e.name
         WHEN eh.oldname LIKE :search_key THEN e.name || ' matched on previous ' || eh.oldname
       END
FROM employees e
LEFT JOIN employeehist eh ON (e.pid = eh.pid)
WHERE e.name LIKE :search_key OR eh.oldname LIKE :search_key
I have come up with this solution:
SELECT * FROM ( /* (3) outer filter query */
  SELECT e.pid, e.name, /* (1) query combining current and matching old names */
         CASE
           WHEN e.name LIKE :search_key THEN 'Y'
           ELSE 'N'
         END AS primary_match,
         (
           SELECT oldname /* (2) subquery that gives me one or no matching old name */
           FROM employeehist eh
           WHERE eh.pid = e.pid
             AND eh.oldname LIKE :search_key
             AND ROWNUM = 1
         ) AS oldname
  FROM employees e
) combined
WHERE combined.primary_match = 'Y' OR combined.oldname IS NOT NULL;
There's one primary select (1) that gets me all current ids and names, and adds a CASE column indicating whether the name matched. Additionally, it runs a subquery (2) that gets me one matching old name (just one even if there are several, and none if there are none). With that in hand I can use an outer select (3) that filters away rows with no matches.
This would return e.g. for search key "Smith%"
pid | name | primary_match | oldname
1 | Smith-Gordon | Y | Smith
or for "Taylor%"
pid | name | primary_match | oldname
2 | Hansen | N | Taylor
I'm not sure how elegant it is, but it works as I want:
I get one result per matching current pid, no matter how many old names that pid has, matching or not. No duplicates.
I can distinguish between results that matched on the current name and those that ("only" or "also") matched on old names.
I don't need to define my matching condition twice because it gets rolled into that CASE column and I can filter on that.
There's obviously room for improvement: The subquery (2) could be made to return an aggregate of all matching old names (or the newest or oldest, I have a column for that).
But this works for me.
I have found a better solution than my previous one. My problem was that I couldn't GROUP BY pid and "squash" differing oldname rows. I'm quite sure I remember that this was possible in MySQL, but Oracle always gave me "ORA-00979: not a GROUP BY expression". Strict but fair.
The solution is apparently to provide Oracle with a strategy how to deal with those rows:
SELECT pid, name,
       MIN(oldname) KEEP (DENSE_RANK FIRST ORDER BY oldname NULLS FIRST) AS oldname
       /* (3) outer select combines current and old hits, and "squashes" duplicates,
          preferring current hits where available */
FROM (
  SELECT e.pid, e.name, NULL AS oldname /* (1) hits in current names */
  FROM employees e
  WHERE e.name LIKE :search_key
  UNION ALL
  SELECT e.pid, e.name, eh.oldname /* (2) hits in old names */
  FROM employeehist eh
  JOIN employees e ON e.pid = eh.pid
  WHERE eh.oldname LIKE :search_key
) combined
GROUP BY pid, name;
The idea is simple: run a query (1) that gives all matches in current names (plus a dummy "oldname" column of NULLs), then a query (2) that gives all matches in old names (complete with their joined current names for display). Then simply combine the two and remove the duplicates by pid (and name, because Oracle requires it, but that's identical by definition), giving preference to rows where oldname is NULL.
This would return e.g. for search key "Smith%"
pid | name | oldname
1 | Smith-Gordon | NULL
which is exactly what I want. If there's a pid with a current and an old match, I don't care about the old one. Or for "Taylor%":
pid | name | oldname
2 | Hansen | Taylor
This query also appears to be roughly 10 times faster than my other solution - I guess because it avoids subqueries that depend on the current pid.
So the only odd thing is that I need to use MIN(oldname) instead of some form of identity. I get that Oracle needs an aggregate function here, but the whole point of the KEEP ... FIRST exercise is to only have one row anyway, no?
But it works, and it's fast, so I won't complain.
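For anyone who wants to try the queries above, here is a minimal sketch of the question's sample data (the DDL itself is an assumption; the column types are guesses):
create table employees (pid number, name varchar2(50));
create table employeehist (pid number, oldname varchar2(50));

insert into employees values (1, 'Smith-Gordon');
insert into employees values (2, 'Hansen');
insert into employees values (3, 'Simpson');

insert into employeehist values (1, 'Smith');
insert into employeehist values (2, 'Taylor');
insert into employeehist values (2, 'Baker');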

Substring inside string

Suppose this is my table:
ID STRING
1 'ABC'
2 'DAE'
3 'BYYYYYY'
4 'H'
I want to select all rows that have at least one of the characters in the STRING column somewhere in another row's STRING variable.
For example, 1 and 2 have an A in common and 1 and 3 have a B in common, but 4 does not have any characters in common with any of the other rows. So my query should return only the first three lines.
I don't need to know with which line it matched.
Thanks!
@A.B.Cade: Good solution, but it could be done without any DISTINCT or JOIN.
SELECT * FROM test t1
WHERE EXISTS (
  SELECT * FROM test t2
  WHERE t1.id <> t2.id
    AND regexp_like(t1.string, '[' || replace(t2.string, '.[]', '\.\[\]') || ']')
)
The query won't compare the string with extra rows, since it stops the comparison as soon as one match is found for the current row...
See fiddle.
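For reference, a minimal sketch of the question's sample data to try the query above (the DDL is an assumption; the table name test matches the query):
create table test (id number, string varchar2(20));

insert into test values (1, 'ABC');
insert into test values (2, 'DAE');
insert into test values (3, 'BYYYYYY');
insert into test values (4, 'H');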
@GolezTrol's answer is a good one, but here is another approach:
select distinct t1."ID", t1."STRING"
from table1 t1, table1 t2
where t1."ID" <> t2."ID"
and regexp_like(t1."STRING", '['|| t2."STRING"||']')
First, take a Cartesian product of the table.
Then make sure you're not comparing the same string to itself.
Then create a regexp from one string for comparison against the other - [<string1>] means that the string must contain one of the letters inside the [ ], which are all taken from string1.
Here is a fiddle
Like this:
select distinct id, name
from (
  select distinct
    x.id,
    x.NAME,
    length(x.NAME) as leng,
    substr(x.name, level, 1) as namechar
  from YourTable x
  start with level = 0
  connect by level <= length(x.name)
) y
where exists (
  select 'x'
  from YourTable z
  where instr(z.name, y.namechar) > 0
    and z.id <> y.id
)
order by id
What it does:
First (inner select), use the table with a number generator that returns a number for each letter in the name. Now each record in YourTable is returned length(name) times, each with a different number. That generated number is used to isolate one letter at a time (substr).
Then (the subselect in the top-level where clause), check whether records exist that contain that isolated letter. Distinct is needed because records are returned more than once if more than one letter matches. You could add namechar to the outer select field list to see the letters that match.

How to optimize a select from several tables with millions of rows

I have the following tables (Oracle 10g):
catalog (
id NUMBER PRIMARY KEY,
name VARCHAR2(255),
owner NUMBER,
root NUMBER REFERENCES catalog(id)
...
)
university (
id NUMBER PRIMARY KEY,
...
)
securitygroup (
id NUMBER PRIMARY KEY
...
)
catalog_securitygroup (
catalog REFERENCES catalog(id),
securitygroup REFERENCES securitygroup(id)
)
catalog_university (
catalog REFERENCES catalog(id),
university REFERENCES university(id)
)
Catalog: 500 000 rows, catalog_university: 500 000, catalog_securitygroup: 1 500 000.
I need to select any 50 rows from catalog with a specified root, ordered by name, for the current university and the current securitygroup. Here is the query:
SELECT ccc.* FROM (
  SELECT cc.*, ROWNUM AS n FROM (
    SELECT c.id, c.name, c.owner
    FROM catalog c, catalog_securitygroup cs, catalog_university cu
    WHERE c.root = 100
      AND cs.catalog = c.id
      AND cs.securitygroup = 200
      AND cu.catalog = c.id
      AND cu.university = 300
    ORDER BY name
  ) cc
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
Where 100 is some catalog, 200 some securitygroup, and 300 some university. This query returns 50 rows out of ~170,000 in 3 minutes.
But the next query returns those rows in 2 seconds:
SELECT ccc.* FROM (
  SELECT cc.*, ROWNUM AS n FROM (
    SELECT c.id, c.name, c.owner
    FROM catalog c
    WHERE c.root = 100
    ORDER BY name
  ) cc
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
I built the following indexes: (catalog.id, catalog.name, catalog.owner), (catalog_securitygroup.catalog, catalog_securitygroup.index), (catalog_university.catalog, catalog_university.university).
Plan for first query (using PLSQL Developer):
http://habreffect.ru/66c/f25faa5f8/plan2.jpg
Plan for second query:
http://habreffect.ru/f91/86e780cc7/plan1.jpg
What are the ways to optimize the query I have?
The indexes that can be useful and should be considered deal with
WHERE c.root = 100
AND cs.catalog = c.id
AND cs.securitygroup = 200
AND cu.catalog = c.id
AND cu.university = 300
So the following fields can be interesting for indexes
c: id, root
cs: catalog, securitygroup
cu: catalog, university
So, try creating
(catalog_securitygroup.catalog, catalog_securitygroup.securitygroup)
and
(catalog_university.catalog, catalog_university.university)
EDIT:
I missed the ORDER BY - these fields should also be considered, so
(catalog.name, catalog.id)
might be beneficial (or some other composite index that could be used for sorting and the conditions - possibly (catalog.root, catalog.name, catalog.id))
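For reference, a sketch of those suggestions as DDL (the index names are assumptions):
CREATE INDEX cs_catalog_secgroup_idx ON catalog_securitygroup (catalog, securitygroup);
CREATE INDEX cu_catalog_univ_idx ON catalog_university (catalog, university);
CREATE INDEX cat_root_name_id_idx ON catalog (root, name, id);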
EDIT2
Although another question is accepted I'll provide some more food for thought.
I have created some test data and run some benchmarks.
The test cases are minimal in terms of record width (in catalog_securitygroup and catalog_university the primary keys are (catalog, securitygroup) and (catalog, university)). Here is the number of records per table:
test=# SELECT (SELECT COUNT(*) FROM catalog), (SELECT COUNT(*) FROM catalog_securitygroup), (SELECT COUNT(*) FROM catalog_university);
?column? | ?column? | ?column?
----------+----------+----------
500000 | 1497501 | 500000
(1 row)
The database is PostgreSQL 8.4, a default Ubuntu install; hardware: i5, 4 GB RAM.
First I rewrote the query to
SELECT c.id, c.name, c.owner
FROM catalog c, catalog_securitygroup cs, catalog_university cu
WHERE c.root < 50
AND cs.catalog = c.id
AND cu.catalog = c.id
AND cs.securitygroup < 200
AND cu.university < 200
ORDER BY c.name
LIMIT 50 OFFSET 100
Note: the conditions are turned into less-than comparisons to maintain a comparable number of intermediate rows (the above query would return 198,801 rows without the LIMIT clause).
If run as above, without any extra indexes (save for PKs and foreign keys), it runs in 556 ms on a cold database (this is actually an indication that I oversimplified the sample data somehow - I would be happier if I had 2-4 s here without resorting to less-than operators).
This brings me to my point - any straight query that only joins and filters (a certain number of tables) and returns only a certain number of records should run under 1 s on any decent database, without any need to use cursors or to denormalize data (one of these days I'll have to write a post on that).
Furthermore, if a query returns only 50 rows and does simple equality joins with restrictive equality conditions, it should run much faster still.
Now let's see what happens if I add some indexes; the biggest potential gain in queries like this is usually the sort order, so let me try that:
CREATE INDEX test1 ON catalog (name, id);
This brings execution time of the query down to 22 ms on a cold database.
And that's the point - if you are trying to get only a page of data, you should only get a page of data, and execution times of queries such as this on normalized data with proper indexes should be under 100 ms on decent hardware.
I hope I didn't oversimplify the case to the point of no comparison (as I stated before, some simplification is present, as I don't know the cardinality of the relationships between catalog and the many-to-many tables).
So, the conclusion is
if I were you, I would not stop tweaking the indexes (and the SQL) until I got the performance of the query below 200 ms, as a rule of thumb.
only if I found an objective explanation of why it can't go below that value would I resort to denormalization and/or cursors, etc...
First, I assume that your university and securitygroup tables are rather small. You posted the sizes of the large tables, but it's really the other sizes that are part of the problem.
Your problem comes from the fact that you can't join the smallest tables first. Your join order should be from small to large. But because your mapping tables don't include a securitygroup-to-university table, you can't join the smallest ones first. So you wind up starting with one or the other, joining to a big table, then to another big table, and only then, with that large intermediate result, do you get to a small table.
If you always have current_univ, current_secgrp and root as inputs, you want to use them to filter as soon as possible. The only way to do that is to change your schema somewhat. In fact, you can leave the existing tables in place if you have to, but you'll be adding to the space used with this suggestion.
You've normalized the data very well. That's great for speed of update... not so great for querying. We denormalize to speed up querying (that's the whole reason for data warehouses - OK, that and history). Build a single mapping table with the following columns:
univ_id, secgrp_id, root, catalog_id. Make it an index-organized table with the first three columns as the PK.
Now when you query that index with all three PK values, you'll finish the index scan with a complete list of allowable catalog ids; then it's just a single join to the catalog table to get the item details, and you're off and running.
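A sketch of that mapping table as DDL (the table and constraint names are assumptions; catalog_id is added to the primary key here so the key stays unique when one (univ, secgrp, root) combination maps to many catalogs):
CREATE TABLE catalog_access_map (
  univ_id    NUMBER,
  secgrp_id  NUMBER,
  root       NUMBER,
  catalog_id NUMBER,
  CONSTRAINT catalog_access_map_pk
    PRIMARY KEY (univ_id, secgrp_id, root, catalog_id)
) ORGANIZATION INDEX;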
The Oracle cost-based optimizer makes use of all the information that it has to decide what the best access paths are for the data and what the least costly methods are for getting that data. So below are some random points related to your question.
The first three tables that you've listed all have primary keys. Do the other tables (catalog_university and catalog_securitygroup) also have primary keys on them? A primary key defines a column or set of columns that are non-null and unique, and is very important in a relational database.
Oracle generally enforces a primary key by generating a unique index on the given columns. The Oracle optimizer is more likely to make use of a unique index if it is available, as it is likely to be more selective.
If possible, an index that contains unique values should be defined as unique (CREATE UNIQUE INDEX ...); this will provide the optimizer with more information.
The additional indexes that you have provided are no more selective than the existing indexes. For example, the index on (catalog.id, catalog.name, catalog.owner) is unique but is less useful than the existing primary key index on (catalog.id). If a query is written to select on the catalog.name column, it is possible to do an index skip scan, but this starts being costly (and may not even be possible in this case).
Since you are trying to select based on the catalog.root column, it might be worth adding an index on that column. This would mean that it could quickly find the relevant rows from the catalog table. The timing for the second query could be a bit misleading: it might be taking 2 seconds to find 50 matching rows from catalog, but these could easily be the first 50 rows in the catalog table... finding 50 that match all your conditions might take longer, and not just because you need to join to other tables to get them. I would always use CREATE TABLE AS SELECT without restricting on ROWNUM when trying to performance tune. With a complex query I generally care about how long it takes to get all the rows back... a simple select with ROWNUM can be misleading.
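For reference, the single-column index suggested above could look something like this (the index name is an assumption):
CREATE INDEX catalog_root_idx ON catalog (root);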
Everything about Oracle performance tuning is about providing the optimizer with enough information and the right tools (indexes, constraints, etc.) to do its job properly. For this reason it's important to gather optimizer statistics using something like DBMS_STATS.GATHER_TABLE_STATS(). Indexes should have statistics gathered automatically in Oracle 10g or later.
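As an illustration, gathering statistics on the catalog table might look like this (a sketch; the choice of table and the cascade option are just examples):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,      -- current schema
    tabname => 'CATALOG',
    cascade => TRUE       -- also gather statistics on the table's indexes
  );
END;
/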
Somehow this grew into quite a long answer about the Oracle optimizer. Hopefully some of it answers your question. Here is a summary of what is said above:
Give the optimizer as much information as possible, e.g. if an index is unique then declare it as such.
Add indexes on your access paths
Find the correct timings for queries without limiting by ROWNUM. It will always be quicker to find the first 50 M&Ms in a jar than to find the first 50 red M&Ms.
Gather optimizer stats
Add unique/primary keys on all tables where they exist.
The use of ROWNUM here is wrong and causes all the rows to be processed. It will process all the rows, assign them all a row number, and then find those between 0 and 50. What you want to look for in the explain plan is COUNT STOPKEY rather than just COUNT.
The query below should be an improvement as it will only get the first 50 rows... but there is still the issue of the joins to look at too:
SELECT ccc.* FROM (
  SELECT cc.*, ROWNUM AS n FROM (
    SELECT c.id, c.name, c.owner
    FROM catalog c
    WHERE c.root = 100
    ORDER BY name
  ) cc
  WHERE ROWNUM <= 50
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
Also, assuming this is for a web page or something similar, maybe there is a better way to handle this than just running the query again to get the data for the next page.
Try declaring a cursor. I don't know Oracle, but in SQL Server it would look like this:
declare @result table (
  id numeric,
  name varchar(255)
);

declare __dyn_select_cursor cursor LOCAL SCROLL DYNAMIC for
  --Select
  select distinct c.id, c.name
  from [catalog] c
  inner join catalog_university u
    on u.catalog = c.id
    and u.university = 300
  inner join catalog_securitygroup s
    on s.catalog = c.id
    and s.securitygroup = 200
  where c.root = 100
  order by name

--Cursor
declare @id numeric;
declare @name varchar(255);
declare @maxrowscount int;
set @maxrowscount = 50;

open __dyn_select_cursor;
fetch relative 1 from __dyn_select_cursor into @id, @name;

while (@@fetch_status = 0 and @maxrowscount <> 0)
begin
  insert into @result values (@id, @name);
  set @maxrowscount = @maxrowscount - 1;
  fetch next from __dyn_select_cursor into @id, @name;
end

close __dyn_select_cursor;
deallocate __dyn_select_cursor;

--Select temp, final result
select id, name
from @result;
