I have a table, mealdb, with three fields: "userid", "timeofday", and "daydate" (all VARCHAR2). timeofday can have one of two values, "noon" or "afternoon". Now I want to calculate the total number of afternoons for a given userid. I tried:
select sum(timeofday)
from mealdb
where timeofday='afternoon' where userid='1200';
But it is giving an error in Oracle 10g.
How do I fix this problem?
Only one WHERE keyword is allowed per query.
Further conditions are combined using boolean operators such as AND and OR:
select count(*)
from mealdb
where timeofday='afternoon'
and userid='1200';
select count(1)
from mealdb
where timeofday='afternoon' and userid='1200';
Try
select count(timeofday) from mealdb where timeofday='afternoon' and userid='1200';
You need to use COUNT, not SUM, and use AND where your second WHERE is.
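Note that even with the doubled WHERE fixed, SUM(timeofday) would still fail, because timeofday is a VARCHAR2 and SUM only accepts numbers. If you really want a SUM-based form, you can sum a derived numeric flag instead; this sketch is equivalent to the COUNT queries above:
select sum(case when timeofday = 'afternoon' then 1 else 0 end) as afternoons
from mealdb
where userid = '1200';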
I'm running Apex 19.2 and I would like to create a classic or interactive report based on a dynamic query.
The query I'm using is not known at design time; it depends on a page item value.
So I have a function that generates the SQL, called as follows:
GetSQLQuery(:P1_MyItem);
This function may return something like
select Field1 from Table1
or
Select field1,field2 from Table1 inner join Table2 on ...
So it's not a SQL query that always has the same number of columns; it's completely variable.
I tried using a PL/SQL Function Body returning SQL Query, but it seems that Apex needs to parse the query at design time.
Does anyone have an idea how to solve this?
Cheers,
Thanks.
Enable the Use Generic Column Names option, as Koen said.
Then set Generic Column Count to the upper bound of the number of columns the query might return.
If you need dynamic column headers too, go to the region attributes and set Type (under Heading) to the appropriate value. PL/SQL Function Body is the most flexible and powerful option, but it's also the most work. Just make sure you return the correct number of headings as per the query.
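For illustration, a heading function body might look like the sketch below. The colon-separated return value is the format Apex expects for PL/SQL headings; the branch on :P1_MyItem and the 'SIMPLE' value are assumptions made up for this example:
-- Sketch of a PL/SQL Function Body for dynamic headings.
declare
  l_headings varchar2(4000);
begin
  if :P1_MyItem = 'SIMPLE' then         -- hypothetical item value
    l_headings := 'Field1';             -- matches: select Field1 from Table1
  else
    l_headings := 'Field1:Field2';      -- matches the two-column join query
  end if;
  return l_headings;
end;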
Is it necessary to use GROUP BY when using an aggregate function together with a column in Oracle?
In MySQL it works fine if I don't use it, but in Oracle it gives me an error.
It's necessary if you select at least one column without an aggregate function.
So, this will work:
select sum(col_1), avg(col_2) from table_1;
while this won't:
select sum(col_1), avg(col_2), col_3 from table_1;
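To make the second statement legal, list the non-aggregated column in a GROUP BY clause, so the aggregates are computed once per distinct col_3:
select sum(col_1), avg(col_2), col_3
from table_1
group by col_3;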
You should always use GROUP BY when selecting non-aggregated columns alongside aggregate functions. Omitting it is a non-standard SQL extension allowed by MySQL.
RANT
IMHO, this extension is brain-dead, outright dangerous and should never be used at all, because MySQL returns values from an arbitrary row for the non-aggregated columns.
END_OF_RANT
Maybe this question is a little stupid, but I'm confused.
How do I group records by a specified column? :)
Item.group(:category_id)
doesn't work...
It says:
ActiveRecord::StatementInvalid: PGError: ERROR: column "items.id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT "items".* FROM "items" GROUP BY category_id
What kind of aggregate function should I use?
Please, could you provide a simple example.
You will have to define how to group the values that share the same category_id. Concatenate them? Calculate a sum?
To create comma-separated lists of values your statement could look like this:
SELECT category_id
,string_agg(col1, ', ') AS col1_list
,string_agg(col2, ', ') AS col2_list
FROM items
GROUP BY category_id
You need Postgres 9.0 or later for string_agg(col1, ', ').
In older versions you can substitute array_to_string(array_agg(col1), ', '). More aggregate functions are listed in the Postgres documentation.
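On those pre-9.0 versions the same query might read like this (a sketch with the substitution applied):
SELECT category_id
      ,array_to_string(array_agg(col1), ', ') AS col1_list
      ,array_to_string(array_agg(col2), ', ') AS col2_list
FROM items
GROUP BY category_id;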
Aggregating values in PostgreSQL is clearly superior to aggregating them in the client: Postgres is very fast at this, and it reduces (network) traffic.
You can use sum, avg, count or any other aggregate function. More on this topic can be found here.
But it seems that you don't really need SQL grouping.
Try fetching all the records and then grouping the Items by category_id in Ruby with Enumerable#group_by, e.g. Item.all.group_by(&:category_id) (Array#collect only maps; it doesn't group).
Grouping in SQL means that the server collapses one or more records from the database table into one resulting row. So if you group by category_id, for example, several records may match a given category, and the database cannot return every column of each of them (which is what SELECT * asks for).
Instead, when you use GROUP BY, you can SELECT only:
columns you have grouped by, and/or
aggregate functions which are performed on all the records belonging to a resulting group
Depending on what exactly you need, modify your .select accordingly.
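For example, to count items per category, the SQL should select only the grouping column plus an aggregate (a minimal sketch against the items table from the question):
SELECT category_id, count(*) AS item_count
FROM items
GROUP BY category_id;
In ActiveRecord that corresponds to something like Item.group(:category_id).count.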
I am new to Oracle and working with a fairly large database. I would like to perform a query that selects the desired columns, orders by a certain column, and limits the results. According to everything I have read, the query below should work, but it returns "ORA-00918: column ambiguously defined":
SELECT * FROM(SELECT * FROM EAI.EAI_EVENT_LOG e,
EAI.EAI_EVENT_LOG_MESSAGE e1 WHERE e.SOURCE_URL LIKE '%.XML'
ORDER BY e.REQUEST_DATE_TIME DESC) WHERE ROWNUM <= 20
Any suggestions would be greatly appreciated :D
The error message means your result set contains two columns with the same name. Each column in a query's projection needs to have a unique name. Presumably you have a column (or columns) with the same name in both EAI_EVENT_LOG and EAI_EVENT_LOG_MESSAGE.
You also want to join on that column. At the moment you are generating a cross join between the two tables. In other words, if you have a hundred records in EAI_EVENT_LOG and two hundred records in EAI_EVENT_LOG_MESSAGE, your result set will be twenty thousand records (without the rownum). This is probably not your intention.
"By switching to innerjoin, will that eliminate the error with the
current code?"
No, you'll still need to handle having two columns with the same name. Basically this comes from using SELECT * on multiple tables. SELECT * is bad practice. It's convenient, but it is always better to specify the exact columns you want in the query's projection. That way you can include (say) e.TRANSACTION_ID and exclude e1.TRANSACTION_ID, and avoid the ORA-00918 exception.
Maybe you have columns with identical names in both the EAI_EVENT_LOG and EAI_EVENT_LOG_MESSAGE tables? Instead of SELECT *, list all the columns you want to select.
Another problem I see is that you are selecting from two tables but not joining them in the WHERE clause, hence the result set will be the cross product of those two tables.
You need to stop using SQL-89 implicit join syntax.
Not because it doesn't work, but because it is evil.
Right now you have a cross join, which in 99.9% of cases is not what you want.
Also, every sub-select needs to have its own alias.
SELECT * FROM
(SELECT e.*, e1.* FROM EAI.EAI_EVENT_LOG e
INNER JOIN EAI.EAI_EVENT_LOG_MESSAGE e1 on (......)
WHERE e.SOURCE_URL LIKE '%.XML'
ORDER BY e.REQUEST_DATE_TIME DESC) s WHERE ROWNUM <= 20
Please specify a join criterion on the dotted line. Normally you join on a key field, e.g. ON (e.id = e1.event_id).
It's a bad idea to use SELECT *; it's better to specify exactly which fields you want:
SELECT e.field1 as customer_id
,e.field2 as customer_name
.....
I am getting comma-separated values passed to a stored procedure in Oracle. I want to treat these values as a table so that I can use them in a query like:
select * from tabl_a where column_b in (<csv values passed in>)
What is the best way to do this in 11g?
Right now we are looping through them one by one and inserting them into a GTT (global temporary table), which I think is inefficient.
Any pointers?
This Ask Tom thread solves exactly the same problem.
Oracle does not come with a built-in tokenizer. But it is possible to roll your own, using SQL types and PL/SQL. I have posted a sample solution in this other SO thread.
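As an illustration, such a tokenizer might be sketched as below. The name str_to_number_tokens matches the usage that follows, but this body is an assumption for purely numeric CSV input, not the linked solution:
-- Collection type and tokenizer function (illustrative sketch).
create or replace type number_tokens_t as table of number;
/
create or replace function str_to_number_tokens (p_csv in varchar2)
  return number_tokens_t
is
  l_result number_tokens_t := number_tokens_t();
begin
  -- One token per comma-separated element.
  for i in 1 .. regexp_count(p_csv, ',') + 1 loop
    l_result.extend;
    l_result(l_result.count) := to_number(regexp_substr(p_csv, '[^,]+', 1, i));
  end loop;
  return l_result;
end;
/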
That would enable a solution like this:
select * from tabl_a
where column_b in ( select *
from table (str_to_number_tokens (<csv values passed in>)))
/
In 11g you can use the "occurrence" parameter of REGEXP_SUBSTR to select the values directly in SQL:
select regexp_substr(<csv values passed in>,'[^,]+',1,level) val
from dual
connect by level < regexp_count(<csv values passed in>,',')+2;
But since regexp_substr is somewhat expensive, I am not sure it is the fastest approach.
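For completeness, the split can feed the original IN-list query directly; in this sketch :csv stands in for the values passed to the procedure, and tabl_a/column_b come from the question:
select *
from tabl_a
where column_b in (
      select regexp_substr(:csv, '[^,]+', 1, level)
      from dual
      connect by level <= regexp_count(:csv, ',') + 1
);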