I have two tables, one called STUDENTS and the other CLASSES. I need to select all the students that are in the same class as one particular student; that student has his own ID number, and it is through this ID number that I have to select everything.
TABLE STUDENTS
nr_rgm
nm_name
nm_father
nm_mother
dt_birth
id_sex
TABLE CLASSES
cd_class
nr_schoolyear
cd_school
cd_degree
nr_series
cd_class
cd_period
So I tried something like this:
SELECT count(*) FROM students, classes WHERE id_sex = 'M' AND
cd_class = (SELECT cd_class FROM classes WHERE nr_rgm = '12150');
But then it raises an error, and the error is the following:
single-row subquery returns more than one row
So, how can I make this work?
You should use "in" and not "=" when applying subselects.
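For example, a minimal sketch of that change applied to the statement from the question (everything else left exactly as it was):

SELECT count(*)
FROM students, classes
WHERE id_sex = 'M'
  -- IN accepts a list, so the subquery may return more than one row
  AND cd_class IN (SELECT cd_class FROM classes WHERE nr_rgm = '12150');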
I think what you really want to do is simply join the two tables together rather than issuing a subselect:
SELECT count(*)
FROM students s, classes c
WHERE s.id_sex = 'M' AND c.nr_rgm = '12150' AND s.cd_class = c.cd_class;
This way you just tell the database: please count all the occurrences where my students.id_sex = 'M' and my classes.nr_rgm = '12150' AND all found students.cd_class match those of my classes.cd_class.
The reason why your statement above fails is that the ordinary = operator, when not used in a join, expects one single value, as in s.id_sex = 'M', while your subquery returns multiple values. To cope with that you have to use the IN operator, which operates on lists.
However, you can and will achieve the very same thing by just joining the two tables together, and it will be much more efficient on bigger data sets.
One more note on the example above. If classes.nr_rgm is a field of data type NUMBER, don't put the ' around the value 12150, as it will lead to an implicit type conversion. In other words, '12150' is a string and will have to be converted to NUMBER before the comparison can be done.
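For example, the join from above written with a numeric literal (assuming classes.nr_rgm really is a NUMBER column):

SELECT count(*)
FROM students s, classes c
WHERE s.id_sex = 'M'
  AND c.nr_rgm = 12150          -- numeric literal, no implicit TO_NUMBER needed
  AND s.cd_class = c.cd_class;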
I have a question about a subquery in the ORDER BY clause. The query below returns an error. Does this mean that a subquery in the ORDER BY clause must be scalar?
select *
from employees
order by (select * from employees where first_name ='Steven' and last_name='King');
Error:
ORA-00913: too many values
00913. 00000 - "too many values"
Yes, it means that if you use a subquery in ORDER BY it must be scalar.
With select * your subquery returns multiple columns, and the DBMS would not know which of these to use for the sorting. And if you selected one column only, you would still have to make sure you only select one row, of course. (The difference is that Oracle sees the too-many-columns problem immediately, but detects the too-many-rows problem only when fetching the data.)
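For example, a one-column subquery like the following compiles fine, but (assuming more than one employee is named Steven) it still fails, only later, when the rows are fetched:

select *
from employees
order by (select birthdate from employees where first_name = 'Steven');
-- ORA-01427: single-row subquery returns more than one row (raised when fetching)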
This would be allowed:
select * from employees
order by (select birthdate from employees where employee_id = 12345);
This is a scalar subquery, because it returns only one value (one column, one row). But of course this still makes as little sense as your original query, because the subquery result is independent of the main query, i.e. it returns the same value for every row in the table and thus no sorting takes effect.
A last remark: a subquery in ORDER BY very seldom makes sense, because that would mean you order by something you don't display. The exception is when looking up a sort key. E.g.:
select *
from products p
where type = 'shirt' and color = 'blue' and size in ('S', 'M', 'L', 'XL')
order by (select sortkey from sizes s where s.size = p.size);
It means that the valid options for the ORDER BY clause are
an expression,
a position or
a column alias.
A subquery is none of these; a quick sketch of the three valid forms follows.
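For instance (using only columns already mentioned above):

select employee_id, last_name as name
from employees
order by first_name;   -- expression
-- order by 2;         -- position: the 2nd item in the select list
-- order by name;      -- column alias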
I have a table [Table1] with three columns, OrganizationName, FieldName and Acres, containing data as follows:
organizationname | fieldname | Acres
ABC              | F1        | 0.96
ABC              | F1        | 0.96
ABC              | F1        | 0.64
I want to calculate the sum of the distinct values of Acres (e.g. 0.96 + 0.64) in DAX.
One of the problems with doing what you want is that many measures rely on filters rather than actual table expressions. So getting a distinct list of values and then filtering the table by those values just gives you the whole table back.
The iterator functions are handy and operate on table expressions, so try SUMX
TotalDistinctAcreage = SUMX(DISTINCT(Table1[Acres]),[Acres])
This will generate a table that is one column containing only the distinct values for Acres, and then add them up. Note that this is only looking at the Acres column, so if different fields and organizations had the same acreage, that acreage would still only be counted once in this sum.
If instead you want to add up the acreage simply on distinct rows, then just make a small change:
TotalAcreageOnDistinctRows = SUMX(DISTINCT(Table1),[Acres])
Hope it helps.
Ok, you added these requirements:
Thank You. :) However, I want to add Distinct values of Acres for a
Particular Fieldname. Is this possible? – Pooja 3 hours ago
The easiest way really is just to go ahead and slice or filter the original measure that I gave you. But if you have to apply the filter context in DAX, you can do it like this:
Measure =
SUMX(
    FILTER(
        SUMMARIZE( Table1, [FieldName], [Acres] ),
        [FieldName] = "<put the name of your specific field here>"
    ),
    [Acres]
)
I'm new to PL/SQL. I have a matrix stored in the DB as a nested table. Something like:
The matrix is stored as a TABLE of objects (and the objects are t1 number, t2 number, ... t100 number).
To get the matrix it would be select x.* from test t, table(t.matrix) x where ..., returning
|T1|T2|T3|...|T100|
I want to create a function that returns the sums over the rows, to be called using SQL only; something equivalent to
select sum(x.T1),sum(x.T2)...sum(x.T100) from test t, table(t.matrix) x where ...
Something like select bigsum(x.*) from test t, table(t.matrix)
It will be called several times, and I don't want to write the 100 columns every time.
If you want to sum the values from 100 different columns, you're going to have to explicitly list those 100 columns at some point. You can encapsulate the logic for that expression in a view, a function, a pipelined table function or some other construct so that you don't have to repeat the expression many times; you just have to reference the abstraction you've created (i.e. call the function that sums the 100 values).
Although it would likely complicate the problem rather than simplify it, you could potentially create a solution that uses dynamic SQL to generate the 100 column names and the expression that adds them together, if you really, really want to avoid writing out 100 column names. It is highly unlikely, however, that the extra complexity of resorting to dynamic SQL would be beneficial unless there are substantial requirements that you haven't mentioned here that make writing out the column names more than a bit repetitive.
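For illustration only, a rough sketch of what that dynamic SQL route could look like. It assumes the object type behind t.matrix is called MATRIX_ROW (a made-up name) and that the filter is passed in as text, which is exactly the kind of extra complexity mentioned above:

create or replace function bigsum_dynamic(p_where in varchar2)
return number
as
  l_expr varchar2(32767);
  l_sum  number;
begin
  -- build "sum(x.T1) + sum(x.T2) + ... + sum(x.T100)" from the type's attributes
  select listagg('sum(x.' || attr_name || ')', ' + ')
           within group (order by attr_no)
    into l_expr
    from user_type_attrs
   where type_name = 'MATRIX_ROW';   -- hypothetical object type name

  -- note: concatenating p_where is not injection-safe; this is only a sketch
  execute immediate
    'select ' || l_expr || ' from test t, table(t.matrix) x where ' || p_where
    into l_sum;

  return l_sum;
end;
/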
" it'll be called several times, and don't want to write the 100
columns every time"
Why not create a view? Write it once, call it as many times as you like:
create or replace view bigsum as
select t.whatever
, sum(x.T1) as sum_t1
, sum(x.T2) as sum_t2
...
, sum(x.T100) as sum_t100
from test t
, table(t.matrix) x
group by t.whatever
You would need to include identifying columns from TEST to allow you to join the view to other tables. This approach would give you something close to what you want:
select *
from bigsum
where whatever = 23
You can reduce the amount of typing further by processing a result set from the data dictionary view USER_TYPE_ATTRS (or a SQL*Plus description) in a decent text editor with a regex search'n'replace.
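A sketch of that idea done directly in SQL, so the generated lines can be pasted into the view definition (the type name MATRIX_ROW is only a placeholder for whatever your object type is called):

select ', sum(x.' || attr_name || ') as sum_' || lower(attr_name) as line
  from user_type_attrs
 where type_name = 'MATRIX_ROW'
 order by attr_no;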
You can create a function of the form given below, depending on your condition; if you require parameters, you can add them while creating the function and use them in the required condition.
create or replace function bigsum
return number
as
  sumall number;
begin
  select sum(x.T1) + sum(x.T2) + ... + sum(x.T100) into sumall
  from test t, table(t.matrix) x where ..(your condition).. ;
  return sumall;
end;
/
and call it in this manner:
select bigsum from dual;
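If you do need parameters, as mentioned above, a sketch of a parameterized variant could look like this (the filter column t.id is only a made-up example):

create or replace function bigsum(p_id in number)
return number
as
  sumall number;
begin
  select sum(x.T1) + sum(x.T2) + ... + sum(x.T100) into sumall
  from test t, table(t.matrix) x
  where t.id = p_id;       -- hypothetical filter column driven by the parameter
  return sumall;
end;
/

and call it as: select bigsum(42) from dual;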
The easiest way to ask my question is with a Hypothetical Scenario.
Let's say we have 3 tables: Singapore_Prices, Produce_Val, and Bosses_Unreasonable_Demands.
So Prices is a pretty simple table: an Item column containing a name, and a Price column containing a number.
Produce_Val is also a simple 2-column table: a Type column containing what type the produce is (fruit or veggie) and then a Name column (Tomato, Pineapple, etc.).
The Bosses_Unreasonable_Demands table only contains one column, Fruit, which CAN contain the names of some fruits.
OK? Ok.
So, my boss wants me to write a query that returns the prices for every fruit in his unreasonable-demands table. Simple enough. BUT, if he doesn't have any entries in his table, he just wants me to output the prices of ALL fruits that exist in Produce_Val.
Now, assuming I don't know where the DBA who designed this silly hypothetical system lives (and therefore can't get him to fix this), our query would look like this:
if <Logic to determine if Bosses demands are empty>
Then
select Item, Price
from Singapore_Prices
where Item in (select Fruit from Bosses_Unreasonable_demands)
Else
select Item, Price
from Singapore_Prices
where Item in (select Name from Produce_val where type = 'Fruit')
end if;
(Well, we'd select those into a variable, and then output the variable, probably with bulk-collect shenanigans, but that's not important)
Which works. It is entirely functional, and won't be slow, even if we extend it out to 2000 stores other than Singapore. (Well, no slower than anything else that touches some 2000 tables.) BUT, I'm still doing two different select statements that are practically identical. My Comp Sci teacher rolls in their grave every time my fingers hit Ctrl-V. I can cut this code in half and only do one select statement. I KNOW I can.
I just have no earthly idea how. I can't use cursors as an in statement, I can't use nested tables or varrays, I can't use cleverly crafted strings, I... I just... I don't know. I don't know how to do this. Is there a way? Does it exist?
Or do I have to copy/paste forever?
Your best bet would be dynamic SQL, because you can't parameterize table or column names.
You would have a SQL query template and logic to determine the tables and columns that you want to query, then blend them together and execute the result.
Another approach (still with a lot of Ctrl-V-like code) is to use the set construct UNION ALL:
select 1st query where boss_condition
union all
select 2nd query where not boss_condition
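Spelled out with the tables from the question, that sketch could look like this (the NOT EXISTS plays the role of "not boss_condition"):

select item, price
from singapore_prices
where item in (select fruit from bosses_unreasonable_demands)
union all
select item, price
from singapore_prices
where item in (select name from produce_val where type = 'Fruit')
  and not exists (select 1 from bosses_unreasonable_demands);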
Try this:
SELECT *
FROM (SELECT s.*, 'BOSS' AS FRUIT_SOURCE, c.BOSS_COUNT
        FROM BOSSES_UNREASONABLE_DEMANDS b
             INNER JOIN SINGAPORE_PRICES s
                ON s.ITEM = b.FRUIT
             CROSS JOIN (SELECT COUNT(*) AS BOSS_COUNT
                           FROM BOSSES_UNREASONABLE_DEMANDS) c
      UNION ALL
      SELECT s.*, 'NORMAL' AS FRUIT_SOURCE, c.BOSS_COUNT
        FROM PRODUCE_VAL p
             INNER JOIN SINGAPORE_PRICES s
                ON s.ITEM = p.NAME
             CROSS JOIN (SELECT COUNT(*) AS BOSS_COUNT
                           FROM BOSSES_UNREASONABLE_DEMANDS) c
       WHERE p.TYPE = 'Fruit')
WHERE (BOSS_COUNT > 0 AND FRUIT_SOURCE = 'BOSS') OR
      (BOSS_COUNT = 0 AND FRUIT_SOURCE = 'NORMAL')
Share and enjoy.
I think you can use nested tables. Assume you have a schema-level nested table type FRUIT_NAME_LIST (defined using CREATE TYPE).
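For reference, a sketch of such a type definition (the VARCHAR2 length is just a guess):

CREATE TYPE fruit_name_list AS TABLE OF VARCHAR2(100);
/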
SELECT fruit
BULK COLLECT INTO my_fruit_name_list
FROM bosses_unreasonable_demands
;
IF my_fruit_name_list.count = 0 THEN
SELECT name
BULK COLLECT INTO my_fruit_name_list
FROM produce_val
WHERE type='Fruit'
;
END IF;
SELECT item, price
FROM singapore_prices
WHERE item MEMBER OF my_fruit_name_list
;
(Or, if you like that better: WHERE item IN (SELECT column_value FROM TABLE(CAST(my_fruit_name_list AS fruit_name_list))).)
I am working in C#.NET and Oracle. I am passing a string to a query. I used this code to concatenate all the item IDs:
List<string> listRetID = new List<string>();
foreach (DataRow row in dtNew.Rows)
{
listRetID.Add(row[3].ToString());
}
This concatenation goes above 10,000 items, so I am getting an error message like this:
ORA-01795: maximum number of expressions in a list is 1000
How can I fix this?
The documentation states:
A comma-delimited list of expressions can contain no more than 1000
expressions. A comma-delimited list of sets of expressions can contain
any number of sets, but each set can contain no more than 1000
expressions.
Presumably you're using this string as the contents of an IN (...) restriction, in which case there isn't really anything you can do - this just won't work. A common way to work around it is to generate a dummy table as a subquery or common table expression (CTE) and join to that, but I'm not sure how you'd translate your List - possibly similarly to whatever you're doing with your IN clause. You'd want to end up with your query looking something like:
with tmp_tab as (
  select <val1 from list> as val from dual
  union all select <val2 from list> from dual
  union all select <val3 from list> from dual
  ...
)
select <something>
from <your table> yt
join tmp_tab tt on yt.<field> = tt.val
But that requires generating the entire (huge) query including the CTE each time you run it, and there's no opportunity to use bind variables.
You might find something like this approach more palatable.
You can have 10 lists of 1000 items instead of 1 list of 10000 items.
WHERE some_column IN (1,2,...,1000)
OR some_column IN (1001,1002,...2000) -- etc.
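A related trick sometimes seen, building on the "sets of expressions" sentence of the documentation quoted above (just a sketch; check that it behaves well on your data before relying on it), is to turn each value into a set, so the 1000-expression cap applies per set rather than to the whole list:

-- each (1, value) pair counts as one set, so the list may hold more than 1000 values
WHERE (1, some_column) IN ((1, 1), (1, 2), /* ...and so on... */ (1, 10000))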
I'm not a C# guy, but I would just split the list listRetID into multiple lists, or create a list of lists.
Then loop through that list of lists and perform the query on each element of the list.
What is the intent of your query?
It looks like you are selecting rows that have some column equal to the 3rd column of one of the records of some query.
The correct way of doing this is either an SQL join or a subquery. There is absolutely no need to bring this into C# code. For example, using a subquery you can write something like this:
SELECT *
FROM atable
WHERE afield IN (
SELECT field3
FROM someothertable)