How to get results from one's output? (loop within loop)

I am trying to find the number of bookings per customer from a table, for customers whose city is Liverpool, but my code seems to give wrong results. What could have gone wrong?
Tables : scustom,
         sbook.
Data : ABCtable type scustom,
       BKcnt(4) type n.

Clear BKcnt.
Select * from scustom into ABCtable where city = 'Liverpool'.
  Select * from sbook.
    BKcnt = BKcnt + 1.
  Endselect.
  Write: / ABCtable-id,
        15 ABCtable-name,
        50 BKcnt.
ENDSELECT.

For "What could have gone wrong", see the diagnosis of #knut : You get the same number for each customer, since you always select the complete sbook table.
For questions like this, it is better to leave the aggregation, grouping etc. to the database. Try this version:
report zz_count_sbook.

parameters: p_city type scustom-city lower case default 'Liverpool'.

data: id    type scustom-id,
      name  type scustom-name,
      count type i.

select customid name count( * ) into (id, name, count)
       from scustom as c
       join sbook as b
         on b~customid = c~id
       where city eq p_city
       group by customid name.

  write: / id,
        15 name,
        50 count.
endselect.

Your Select * from sbook. doesn't contain any condition, so on each pass of the outer loop you count all entries in sbook, not only the ones connected to your current scustom entry.
I don't know your tables, so I can't give you the exact select.
Your counting is also not efficient; you don't need to count yourself:
Tables : scustom,
         sbook.
Data : ABCtable type scustom,
       BKcnt(4) type n.

Clear BKcnt.
Select * from scustom into ABCtable where city = 'Liverpool'.
  Select count( * ) into BKcnt from sbook
    where <???> = ABCtable-<???>. "I don't know your keys, replace <???> with the correct fields.
  Write: / ABCtable-id,
        15 ABCtable-name,
        50 BKcnt.
ENDSELECT.
(I hope my select count is correct. Please check it with the syntax checker. At least there is an aggregation function in ABAP!)

Always use FOR ALL ENTRIES to join two tables for better performance.
SELECT * FROM scustom INTO TABLE ABCtable WHERE city = 'Liverpool'.

SELECT count( * ) INTO BKcnt FROM sbook FOR ALL ENTRIES IN ABCtable
  WHERE <???> = ABCtable-<???>.


Trying to display top 3 amount from a table using sql query in oracle 11g..column is of varchar type

I am trying to list the top 3 records from a table based on an amount stored in column FTE_TMUSD, which is of VARCHAR datatype.
Below is the query I tried:
SELECT *
FROM
(
  SELECT * FROM FSE_TM_ENTRY
  ORDER BY FTE_TMUSD DESC
)
WHERE rownum <= 3
ORDER BY FTE_TMUSD DESC;
The output I got: 972, 9680, 963 -- FTE_TMUSD values which are not in descending order.
I am expecting an output that displays the top 3 records by value.
That should work; the inline view is ordered by FTE_TMUSD in descending order, and you're selecting values from it.
What looks suspicious are the values you specified as the result. It appears that FTE_TMUSD's datatype is VARCHAR2 (ah, yes - it is, you said so). That means the values are sorted as strings, not numbers - and it seems that you expect numbers. So, apply TO_NUMBER to that column. Note that it will fail if the column contains anything but numbers (for example, if there's a value 972C).
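Applied to your query, it would look something like this (an untested sketch against the FSE_TM_ENTRY table from the question):
SELECT *
FROM
(
  SELECT * FROM FSE_TM_ENTRY
  ORDER BY TO_NUMBER(FTE_TMUSD) DESC
)
WHERE rownum <= 3;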
Also, an alternative to your query might be use of analytic functions, such as row_number:
with temp as
  (select f.*,
          row_number() over (order by to_number(f.fte_tmusd) desc) rn
   from fse_tm_entry f
  )
select *
from temp
where rn <= 3;

Can I use FOR ALL ENTRIES with GROUP BY?

Currently the code looks something like this:
LOOP AT lt_orders ASSIGNING <fs_order>.
  SELECT COUNT( * ) AS cnt
    FROM order_items
    INTO <fs_order>-cnt
    WHERE order_id = <fs_order>-order_id.
ENDLOOP.
It is the slowest part of the report. I want to speed it up.
How can I use FOR ALL ENTRIES with GROUP BY?
Check the documentation. You can't use GROUP BY. Maybe in this case, you could try selecting your items with FAE outside of the loop, then count them using a parallel cursor:
REPORT.

TYPES: BEGIN OF ty_result,
         vbeln TYPE vbeln,
         cnt   TYPE i,
       END OF ty_result.

DATA: lt_headers TYPE SORTED TABLE OF ty_result WITH UNIQUE KEY vbeln,
      lv_tabix   TYPE sy-tabix VALUE 1.

"get the headers
SELECT vbeln FROM vbak UP TO 100 ROWS
  INTO CORRESPONDING FIELDS OF TABLE lt_headers.

"get corresponding items
SELECT vbeln, posnr FROM vbap FOR ALL ENTRIES IN @lt_headers
  WHERE vbeln EQ @lt_headers-vbeln
  ORDER BY vbeln, posnr
  INTO TABLE @DATA(lt_items).

LOOP AT lt_headers ASSIGNING FIELD-SYMBOL(<h>).
  LOOP AT lt_items FROM lv_tabix ASSIGNING FIELD-SYMBOL(<i>).
    IF <i>-vbeln NE <h>-vbeln.
      lv_tabix = sy-tabix.
      EXIT.
    ELSE.
      <h>-cnt = <h>-cnt + 1.
    ENDIF.
  ENDLOOP.
ENDLOOP.

BREAK-POINT.
Or join header/item with a distinct count on the item id (whichever column that would be in your table).
You should be able to do something like
SELECT order_id, COUNT( order_item_id ) AS cnt
  FROM order_items
  GROUP BY order_id
  INTO CORRESPONDING FIELDS OF TABLE @lt_count.
Assuming that order_item_id is a key field of the order_items table, and that lt_count has two fields: cnt of type int8 and order_id of the same type as your other order_id fields.
PS: then you can loop over lt_count and move the counts to lt_orders, or the other way around. To speed up the loop, sort one of the tables and use READ ... BINARY SEARCH, as sketched below.
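A minimal sketch of that merge step, assuming lt_orders rows have a cnt field to receive the counts (names follow the question):
SORT lt_count BY order_id.
LOOP AT lt_orders ASSIGNING FIELD-SYMBOL(<fs_order>).
  "binary search works because lt_count is sorted by order_id
  READ TABLE lt_count ASSIGNING FIELD-SYMBOL(<fs_count>)
       WITH KEY order_id = <fs_order>-order_id BINARY SEARCH.
  IF sy-subrc = 0.
    <fs_order>-cnt = <fs_count>-cnt.
  ENDIF.
ENDLOOP.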
I did this with table KNB1 (customer master in company code), where we have customers that are created in several company codes.
Please note: because of FOR ALL ENTRIES you have to SELECT the full key.
TYPES: BEGIN OF ty_knb1,
         kunnr TYPE knb1-kunnr,
         count TYPE i,
       END OF ty_knb1.
TYPES: BEGIN OF ty_knb1_fae,
         kunnr TYPE knb1-kunnr,
       END OF ty_knb1_fae.

DATA: lt_knb1_fae TYPE STANDARD TABLE OF ty_knb1_fae.
DATA: lt_knb1 TYPE HASHED TABLE OF ty_knb1
              WITH UNIQUE KEY kunnr.
DATA: ls_knb1 TYPE ty_knb1.
DATA: ls_knb1_db TYPE knb1.

START-OF-SELECTION.

  lt_knb1_fae = VALUE #( ( kunnr = ... ) ). "add at least one customer which is created in several company codes

  ls_knb1-count = 1.
  SELECT kunnr bukrs
    INTO CORRESPONDING FIELDS OF ls_knb1_db
    FROM knb1
    FOR ALL ENTRIES IN lt_knb1_fae
    WHERE kunnr EQ lt_knb1_fae-kunnr.

    ls_knb1-kunnr = ls_knb1_db-kunnr.
    COLLECT ls_knb1 INTO lt_knb1.
  ENDSELECT.
Create a range table for your lt_orders, for example lt_orders_range.
Then do: select order_id, count( * ) ... where order_id in lt_orders_range ... group by order_id.
If you think creating a range table is too much effort, consider that you save a lot of runtime by running just one select for all orders instead of a single select for each order id. A sketch follows below.
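A possible shape of that, assuming lt_orders rows carry an order_id field (the range type and the result table here are illustrative, not tested):
DATA lt_orders_range TYPE RANGE OF order_items-order_id.

"build one 'I EQ' range line per order id
lt_orders_range = VALUE #( FOR <o> IN lt_orders
                           ( sign = 'I' option = 'EQ' low = <o>-order_id ) ).

"one aggregated select instead of one COUNT( * ) per order
SELECT order_id, COUNT(*) AS cnt
  FROM order_items
  WHERE order_id IN @lt_orders_range
  GROUP BY order_id
  INTO TABLE @DATA(lt_counts).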
Not directly, only through a CDS view
While all of the answers provide a faster solution than the one in the question, the fastest way is not mentioned.
If you have at least Netweaver 7.4, EHP 5 (and you should, it was released in 2014), you can use CDS views, even if you are not on HANA.
It still cannot be done directly, as OpenSQL does not allow FOR ALL ENTRIES with GROUP BY, and CDS views cannot handle FOR ALL ENTRIES. However, you can create one of each.
CDS:
@AbapCatalog.sqlViewName: 'zorder_i_fae'
DEFINE VIEW zorder_items_fae AS SELECT FROM order_items {
  order_id,
  count( * ) AS cnt
}
GROUP BY order_id
OpenSQL:
SELECT *
  FROM zorder_items_fae
  FOR ALL ENTRIES IN @lt_orders
  WHERE order_id = @lt_orders-order_id
  INTO TABLE @DATA(lt_order_cnt).
Speed
If lt_orders contains more than about 30% of all possible order_id values from table ORDER_ITEMS, the answer from iPirat is faster. (While using more memory, obviously)
However, if you need only a couple hundred order_id values out of millions, this solution is about 10 times faster than any other answer, and 100 times faster than the original.

Understanding the behavior of dbms_random.value in where clause

I have a table with two columns (using an Oracle 11g database): Country, IndexNumber. The table contains 10 rows (10 different countries, each with its unique index number).
For example:
Country     IndexNumber
India       1
Australia   2
...         ...
US          10
Now I want to fetch a random row from the above table by generating a random number using dbms_random.value(1,10). To achieve that I am using the query below:
select * from tab_name where indexnumber = dbms_random.value(1,10);
I am not able to understand the output of this query: sometimes it fetches one row, sometimes zero rows, and sometimes more than one row.
Can someone please explain how Oracle evaluates this query.
Thanks
Ankit
Since dbms_random.value is a nondeterministic PL/SQL function, it will be called once for each row evaluated by the query.
The function might return 4 when evaluating the first row, then it might return 8 on the second row, etc.
To compare each row to a single random number, you can turn the function call into a scalar subquery, e.g.:
select * from tab_name where indexnumber = (select dbms_random.value(1,10) from dual);
Since the subquery is not correlated to the main query, Oracle will execute it only once (for the first row returned from the table) and remember the result for all subsequent rows. In particular, if a suitable index exists on indexnumber, the query will be able to use it more efficiently, since Oracle knows it is probing for a single value.
When you run your original query:
select * from tab_name where indexnumber = dbms_random.value(1,10);
it appears that the call to dbms_random happens for each record's WHERE clause evaluation. In other words, there is a chance that every record in your table might be returned, if the random number chosen happens to match the index for every record. If you want to retrieve a single random record, then follow this pattern:
select *
from
( select * from tab_name order by DBMS_RANDOM.VALUE )
where rownum < 2;

PL SQL concatenate 2 resultsets

I need to concatenate the result sets of 2 similar queries. For some reason I had to split the original query in 2, both with their corresponding order by clause. It should be something like this (an oversimplification of the original queries):
Query 1: Select name, age from person where age=10
Result set 1:
Person1, 10
Person3, 10
Query 2: Select name, age from person where age=20
Result set 2:
Person2, 20
Person6, 20
The expected result:
Person1, 10
Person3, 10
Person2, 20
Person6, 20
I cannot simply use Query1 UNION Query2.
Below are the 2 original queries; they differ only in the CP_ESGASTO filter:
(#1)
select cp.CP_ID, cpi.CI_DESCRIPCION, cp.CP_CODIGOJERARQUIZADO, cp.CP_ESGASTO as gasto, cp.CP_CONCEPTOPADRE, LEVEL
from TGCCP_ConceptoPagoIng cp
left join tgcci_ConceptoPagoIngIdioma cpi on cpi.CI_IDCONCEPTOPAGOING = cp.CP_ID and cpi.CI_IDIDIOMA = 1
start with ((CP_CONCEPTOPADRE is null) and (cp.CP_ESGASTO = 1))
connect by prior cp.CP_ID = cp.CP_CONCEPTOPADRE
order siblings by CP_CODIGOJERARQUIZADO
(#2)
select cp.CP_ID, cpi.CI_DESCRIPCION, cp.CP_CODIGOJERARQUIZADO, cp.CP_ESGASTO as gasto, cp.CP_CONCEPTOPADRE, LEVEL
from TGCCP_ConceptoPagoIng cp
left join tgcci_ConceptoPagoIngIdioma cpi on cpi.CI_IDCONCEPTOPAGOING = cp.CP_ID and cpi.CI_IDIDIOMA = 1
start with ((CP_CONCEPTOPADRE is null) and (cp.CP_ESGASTO = 2))
connect by prior cp.CP_ID = cp.CP_CONCEPTOPADRE
order siblings by CP_CODIGOJERARQUIZADO
I think you want:
select * from ( first query )
UNION ALL
select * from ( second query )
where first query and second query are the queries from above. You are turning them into subqueries, thus preserving their order by clauses.
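With the simplified person example from the question, that shape would be (a sketch; note that each order by has to live inside its own subquery):
select * from ( select name, age from person where age = 10 order by name )
UNION ALL
select * from ( select name, age from person where age = 20 order by name );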
OK, well, I'm not fully certain why you need it this way, but if Oracle won't allow you to do a UNION, or it screws up the ordering when you do, I would try creating a pipelined table function.
An example here.
Basically, you'd create a procedure that runs both queries, first one, then the other, putting the results of each into the returned dataset.
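For illustration, a minimal sketch of such a pipelined function using the simplified person example from the question (all object and function names here are made up):
create or replace type person_row as object ( name varchar2(100), age number );
/
create or replace type person_tab as table of person_row;
/
create or replace function concat_results return person_tab pipelined is
begin
  -- emit the rows of the first query, then the second, keeping each internal order
  for r in ( select name, age from person where age = 10 order by name ) loop
    pipe row ( person_row( r.name, r.age ) );
  end loop;
  for r in ( select name, age from person where age = 20 order by name ) loop
    pipe row ( person_row( r.name, r.age ) );
  end loop;
  return;
end;
/
-- usage:
select * from table( concat_results );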
It looks like you are looking for a MULTISET UNION, which can only be used from version 10 upwards.
Regards,
Rob.
You could combine your queries as subqueries and do a single order by on the outer query:
select * from (
<query 1 with its order by>
UNION ALL
<query 2 with its order by>
)
order by column1, column2;
Alternatively, you can implement in PL/SQL the equivalent of a sort merge join with two cursors, but that's unnecessarily complicated.
This solution works perfectly:
select * from ( first query )
UNION ALL
select * from ( second query )
I appreciate everyone who has taken the time to answer.
Regards.
For your example:
Select name, age from person where age in (10,20)
or
Select name, age from person where age = 10 or age = 20
However I'm guessing this is not what you need :)

How to put more than 1000 values into an Oracle IN clause [duplicate]

This question already has answers here:
SQL IN Clause 1000 item limit
(5 answers)
Closed 8 years ago.
Is there any way to get around the Oracle 10g limitation of 1000 items in a static IN clause? I have a comma-delimited list of many IDs that I want to use in an IN clause. Sometimes this list can exceed 1000 items, at which point Oracle throws an error. The query is similar to this:
select * from table1 where ID in (1,2,3,4,...,1001,1002,...)
Put the values in a temporary table and then do a select where id in (select id from temptable).
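For example (a sketch; the temporary table and its name are illustrative):
create global temporary table temp_ids ( id number ) on commit delete rows;

-- fill temp_ids from the application (in batches if needed), then:
select * from table1 where id in ( select id from temp_ids );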
This works because the 1000-item limit (ORA-01795) applies only to single-column expression lists; multi-column lists are not subject to it:
select column_X, ... from my_table
where ('magic', column_X ) in (
      ('magic', 1),
      ('magic', 2),
      ('magic', 3),
      ('magic', 4),
      ...
      ('magic', 99999)
) ...
I am almost sure you can split values across multiple INs using OR:
select * from table1 where ID in (1,2,3,4,...,1000) or
ID in (1001,1002,...,2000)
You may try to use the following form:
select * from table1 where ID in (1,2,3,4,...,1000)
union all
select * from table1 where ID in (1001,1002,...)
Where do you get the list of ids from in the first place? Since they are IDs in your database, did they come from some previous query?
When I have seen this in the past it has been because:
- a reference table is missing, and the correct fix would be to add the new table, put an attribute on that table, and join to it
- a list of ids is extracted from the database and then used in a subsequent SQL statement (perhaps later or on another server or whatever). In this case, the answer is to never extract them from the database. Either store them in a temporary table or just write one query.
I think there may be better ways to rework this code than just getting this SQL statement to work. If you provide more details you might get some ideas.
Use ... from table( ... ):
create or replace type numbertype
as object
(nr number(20,10) )
/
create or replace type number_table
as table of numbertype
/
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
open p_ref_result for
select *
from employees , (select /*+ cardinality(tab 10) */ tab.nr from table(p_numbers) tab) tbnrs
where id = tbnrs.nr;
end;
/
This is one of the rare cases where you need a hint, else Oracle will not use the index on column id. One of the advantages of this approach is that Oracle doesn't need to hard parse the query again and again. Using a temporary table is most of the time slower.
edit 1 simplified the procedure (thanks to jimmyorr) + example
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
open p_ref_result for
select /*+ cardinality(tab 10) */ emp.*
from employees emp
, table(p_numbers) tab
where tab.nr = id;
end;
/
Example:
set serveroutput on
create table employees ( id number(10),name varchar2(100));
insert into employees values (3,'Raymond');
insert into employees values (4,'Hans');
commit;
declare
l_number number_table := number_table();
l_sys_refcursor sys_refcursor;
l_employee employees%rowtype;
begin
l_number.extend;
l_number(1) := numbertype(3);
l_number.extend;
l_number(2) := numbertype(4);
tableselect(l_number, l_sys_refcursor);
loop
fetch l_sys_refcursor into l_employee;
exit when l_sys_refcursor%notfound;
dbms_output.put_line(l_employee.name);
end loop;
close l_sys_refcursor;
end;
/
This will output:
Raymond
Hans
I wound up here looking for a solution as well.
Depending on the high-end number of items you need to query against, and assuming your items are unique, you could split your query into batches of 1000 items and combine the results on your end instead (pseudocode here):
//remove dupes
items = items.RemoveDuplicates();
//break the items into batches of 1000
batches = new batch list;
batch = new batch;
for (int i = 0; i < items.Count; i++)
{
  batch.Add(items[i]);
  if (batch.Count == 1000 || i == items.Count - 1)
  {
    //batch is full, or this was the last item: store it and start a fresh one
    //(don't Clear() the stored batch - that would also empty it inside the list)
    batches.Add(batch);
    batch = new batch;
  }
}
// now go query the db for each batch
results = new results;
foreach (batch in batches)
{
  results.Add(query(batch));
}
This may be a good trade-off in the scenario where you don't typically have over 1000 items, so that exceeding 1000 is your high-end edge case. For example, in the event that you have 1500 items, two queries of (1000, 500) wouldn't be so bad. This also assumes that each query isn't particularly expensive in its own right.
This wouldn't be appropriate if your typical number of expected items were much larger - say, in the 100000 range - requiring 100 queries. If so, then you should probably look more seriously into the global temporary tables solution provided above as the most "correct" solution. Furthermore, if your items are not unique, you would need to resolve duplicate results in your batches as well.
Yes, this is a very weird situation with Oracle.
If you specify 2000 ids inside the IN clause, it will fail.
This fails:
select ...
where id in (1,2,...,2000)
But if you simply put the 2000 ids in another table (a temp table, for example), the query below will work:
select ...
where id in (select userId
             from temptable_with_2000_ids )
Alternatively, you could split the records into groups of 1000 and execute the queries group by group.
Here is some Perl code that tries to work around the limit by creating an inline view and then selecting from it. The statement text is compressed by using rows of twelve items each instead of selecting each item from DUAL individually, then uncompressed by unioning together all columns. UNION or UNION ALL in decompression should make no difference here as it all goes inside an IN which will impose uniqueness before joining against it anyway, but in the compression, UNION ALL is used to prevent a lot of unnecessary comparing. As the data I'm filtering on are all whole numbers, quoting is not an issue.
#
# generate the innards of an IN expression with more than a thousand items
#
use English '-no_match_vars';
sub big_IN_list{
    @_ < 13 and return join ', ', @_;
    my $padding_required = (12 - (@_ % 12)) % 12;
    # get first dozen and make length of @_ an even multiple of 12
    my ($a,$b,$c,$d,$e,$f,$g,$h,$i,$j,$k,$l) = splice @_, 0, 12, ( ('NULL') x $padding_required );
    my @dozens;
    local $LIST_SEPARATOR = ', '; # how to join elements within each dozen
    while(@_){
        push @dozens, "SELECT @{[ splice @_, 0, 12 ]} FROM DUAL"
    };
    $LIST_SEPARATOR = "\n union all\n "; # how to join @dozens
    return <<"EXP";
WITH t AS (
select $a A, $b B, $c C, $d D, $e E, $f F, $g G, $h H, $i I, $j J, $k K, $l L FROM DUAL
union all
@dozens
)
select A from t union select B from t union select C from t union
select D from t union select E from t union select F from t union
select G from t union select H from t union select I from t union
select J from t union select K from t union select L from t
EXP
}
One would use that like so:
my $bases_list_expr = big_IN_list(list_your_bases());
$dbh->do(<<"UPDATE");
update bases_table set belong_to = 'us'
where id in ($bases_list_expr)
UPDATE
Instead of using an IN clause, can you try using a JOIN with the other table that provides the ids? That way we don't need to worry about the limit. Just a thought from my side.
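A sketch of that idea, assuming the ids already sit in some source table (source_table is a made-up name):
select t1.*
from table1 t1
join source_table s on s.id = t1.id;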
Instead of SELECT * FROM table1 WHERE ID IN (1,2,3,4,...,1000);
use this:
SELECT * FROM table1 WHERE ID IN (SELECT ROWNUM AS ID FROM dual CONNECT BY LEVEL <= 1000);
Note that you need to be sure the IDs do not refer to any other foreign IDs if there is a dependency. To ensure only existing ids are used:
SELECT * FROM table1 WHERE ID IN (SELECT DISTINCT(ID) FROM tablewhereidsareavailable);
Cheers
