How to group by one column with a single aggregate function but select multiple columns in Oracle?

I want to group by one column and apply an aggregate function to a single column while selecting multiple other columns. This is easily achieved in MySQL with the query below.
SELECT sum(count),store,date,product FROM sales_log_bak where date > "2017-03-01" and date < "2017-04-05" group by date
However, the above query doesn't work on an Oracle database. What is the equivalent query in Oracle that achieves the same result MySQL gives for the query above?

You can use Analytic Functions:
SELECT sum(sales_count) OVER (PARTITION BY sales_date),
       store, sales_date, product
FROM sales_log_bak
WHERE sales_date > DATE '2017-03-01' AND sales_date < DATE '2017-04-05';
Note: date and count are reserved words in Oracle, so you should not use them as column names (hence sales_date and sales_count above).
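The analytic version returns one row for every row of sales_log_bak, each carrying the per-date total, whereas MySQL's loose GROUP BY collapses to one row per date. If one row per date is wanted, a plain GROUP BY sketch like the one below also works in Oracle; using MIN for store and product is an assumption, since Oracle will not return arbitrary values for non-grouped columns the way MySQL does.
SELECT sales_date,
       SUM(sales_count) AS total_count,
       MIN(store)       AS store,    -- assumption: any representative value is acceptable
       MIN(product)     AS product   -- assumption: any representative value is acceptable
FROM sales_log_bak
WHERE sales_date > DATE '2017-03-01'
  AND sales_date < DATE '2017-04-05'
GROUP BY sales_date;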

Related

Count date strings between a range of dates

I have a Hive table (table_1). One of the columns in that table is called 'date'. Values in that column are of 'string' type in the format 'yyyyMMdd' (e.g. 20210102). I am trying to get the count(*) of records for a range of dates in that column.
Example: select count(*) from table_1 where date BETWEEN 20210101 AND 20210301. This does not work as written because the column is of 'string' type. I need some help querying the DATE version of that column.
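A minimal sketch of one way to do this in Hive, assuming the values are always zero-padded yyyyMMdd strings (so string order matches date order); table_1 and the date column are taken from the question, and the backticks guard against date being treated as a keyword.
-- Fixed-width yyyyMMdd strings sort the same way as the dates they encode,
-- so a quoted string range already selects the right rows:
SELECT count(*)
FROM table_1
WHERE `date` BETWEEN '20210101' AND '20210301';

-- If a real DATE value is needed, convert the string first:
SELECT count(*)
FROM table_1
WHERE to_date(from_unixtime(unix_timestamp(`date`, 'yyyyMMdd')))
      BETWEEN cast('2021-01-01' as date) AND cast('2021-03-01' as date);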

Is expression based partitioning supported in hive?

I have a table with a column; can I create a partition based on an expression using that column?
I read that IBM's Big SQL technology has this feature.
I also know we can partition in Hive by a column, but what about an expression?
In this case I am doing a cast; it could be any expression:
CREATE TABLE INVENTORY_A (
trans_id int,
product varchar(50),
trans_ts timestamp
)
PARTITIONED BY (
cast(trans_ts as date) AS date_part
)
I expect the records to be partitioned by the date value. So I expect that when a user writes a query like
select * from INVENTORY_A where trans_ts BETWEEN timestamp '2016-06-23 14:00:00.000' AND timestamp '2016-06-23 14:59:59.000'
the query will be smart enough to break the timestamp down to the date and filter only on the date.
You can use dynamic partitioning and cast your variables in the SELECT query, as sketched below.
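A minimal sketch of that approach, assuming the data arrives in a hypothetical staging table inventory_a_staging; the partition column date_part is declared as an ordinary partition column and computed in the SELECT rather than in the DDL.
-- Standard Hive settings that enable dynamic partitioning
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

CREATE TABLE INVENTORY_A (
  trans_id int,
  product varchar(50),
  trans_ts timestamp
)
PARTITIONED BY (date_part date);

-- The last expression in the SELECT list supplies the partition value,
-- so each row lands in the partition for cast(trans_ts as date).
INSERT OVERWRITE TABLE INVENTORY_A PARTITION (date_part)
SELECT trans_id,
       product,
       trans_ts,
       cast(trans_ts as date) AS date_part
FROM inventory_a_staging;   -- hypothetical source table
Note that partition pruning is driven by predicates on date_part itself; a query that filters only on trans_ts, like the one in the question, will not automatically be rewritten to prune by date.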

How can I convert a date to another format at run time in Oracle?

I have a date string coming from user input in the format of DD/MM/YYYY and I need to match it against a date column in our database in the format of DD-MON-YY.
Example input is 01/01/2015 and example date column in our database:
SELECT MAX(creation_date) FROM orders;
MAX(creation_date)
------------------
06-AUG-15
I need to query in the format:
SELECT * FROM orders WHERE creation_date = 01/01/2015
and somehow have that converted to 01-JAN-15.
Is it possible with some built-in Oracle function?
Use to_date if the column in the table is of DATE type:
http://www.techonthenet.com/oracle/functions/to_date.php
to_char allows you to specify different formats in a SQL statement.
Example: to_char(sysdate, 'DD-MON-YYYY') displays a date such as 06-AUG-2015.
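A small illustration of both directions, using the orders table from the question (a sketch, not the only way to write it):
-- String -> DATE: interpret the user input 01/01/2015 as a date value
SELECT TO_DATE('01/01/2015', 'DD/MM/YYYY') AS parsed_date FROM dual;

-- DATE -> string: display the column in the DD-MON-YY format shown above
SELECT TO_CHAR(creation_date, 'DD-MON-YY') AS creation_display
FROM orders;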
Use to_date to compare your date column to a date string, but be careful when doing so, since your date column may include a time component that isn't shown when selecting from your table.
If there is no index on your date column, you can truncate it during the comparison:
SELECT * FROM orders WHERE TRUNC(creation_date) = TO_DATE('01/01/2015','mm/dd/yyyy');
If there is an index on your date column and you still want to use it then use a ranged comparison:
SELECT * FROM orders
WHERE creation_date >= TO_DATE('01/01/2015','mm/dd/yyyy')
and creation_date < TO_DATE('01/01/2015','mm/dd/yyyy')+1;

Column name is masked in oracle indexes

I have a table in an Oracle DB which has a unique index composed of two columns (id and valid_from). The column valid_from is of type TIMESTAMP WITH TIME ZONE.
When I query SYS.USER_IND_COLUMNS to see which columns my table is using in the unique index, I cannot see the name of the valid_from column; instead I see something like SYS_NC00027$.
Is there any way to display the name valid_from rather than SYS_NC00027$?
Apparently Oracle creates a function-based index for TIMESTAMP WITH TIME ZONE columns.
Their definitions can be found in the view ALL_IND_EXPRESSIONS.
Something like this should get you started:
select ic.index_name,
       ic.column_name,
       ie.column_expression
from all_ind_columns ic
  left join all_ind_expressions ie
    on ie.index_owner = ic.index_owner
   and ie.index_name = ic.index_name
   and ie.column_position = ic.column_position
where ic.table_name = 'FOO';
Unfortunately column_expression is a (deprecated) LONG column and cannot easily be used in a coalesce() or nvl() function.
Use the query below to verify the column info:
select column_name,virtual_column,hidden_column,data_default from user_tab_cols where table_name='EMP';

How to optimize oracle query from partitioned table?

I have a table partitioned by date (by days) and a local index on the fields (including the date field). If I run the query:
SELECT * FROM table t WHERE t.fdate = '30.06.2011'
it completes quickly, but when I run
SELECT * FROM table t WHERE EXTRACT(month from t.fdate) = 6 AND
EXTRACT(year from t.fdate) = 2011
it takes approximately 200 seconds.
How can I optimize this query? Maybe I need to create a local index on EXTRACT(month from date) and EXTRACT(year from date)?
As you have an index on the date field, you should write your query in a way that allows this index to be used. This is not possible with the EXTRACT functions, since Oracle must go through all the data and evaluate the condition for each row to determine whether it matches.
The date index can be used if you are looking for a specific date or a range of dates. In your case you want a range of dates, so the query could be written as:
SELECT * FROM TABLE T
WHERE T.FDATE BETWEEN TO_DATE('1.6.2011', 'DD.MM.YYYY') AND TO_DATE('30.6.2011', 'DD.MM.YYYY');
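If FDATE can carry a time-of-day component (the question doesn't say), a half-open range is a safer sketch, since BETWEEN with TO_DATE('30.6.2011', ...) stops at midnight at the start of June 30:
SELECT * FROM TABLE T
WHERE T.FDATE >= TO_DATE('1.6.2011', 'DD.MM.YYYY')
  AND T.FDATE < TO_DATE('1.7.2011', 'DD.MM.YYYY');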
