I have a rather large table (34 GB, 77M rows) which contains payment information. The table is partitioned by payment date because users usually care about small ranges of dates so the partition pruning really helps queries to return quickly.
The problem is that I have a user who wants to find out all payments that have ever been made to certain people.
Names are stored in columns NAME1 and NAME2, which are both VARCHAR2(40 Byte) and hold free-form full name data. For example, John Q Public could appear in either column as:
John Q Public
John Public
Public, John Q
or even embedded in the middle of the field, like "Estate of John Public"
Right now, the way the query is set up is to look for
NAME1||NAME2 LIKE '%JOHN%PUBLIC%' OR NAME1||NAME2 LIKE '%PUBLIC%JOHN%' and as you can imagine, the performance sucks.
Is this a job for Oracle Text? How else could I better index the atomic bits of the columns so that the user can search by first/last name?
Database Version: Oracle 12c (12.1.0.2.0)
Create a multi-column index on both names and modify your query to use an INDEX FAST FULL SCAN operation.
Traversing a b-tree index is a great way to quickly find a small amount of data. Unfortunately the leading wildcards ruin that access path for your query. However, Oracle has multiple ways of reading data from an index. The INDEX FAST FULL SCAN operation simply reads all of the index blocks in no particular order, as if the index was a skinny table. Since the average row length of your table is 442 bytes, and the two columns use at most 80 bytes, reading all the names in the index may be much faster than scanning the entire table.
But the index alone probably isn't enough. You need to change the concatenation into multiple OR expressions.
Sample schema:
--Create payment table and index on name columns.
create table payment
(
id number,
paydate date,
other_data varchar2(400),
name1 varchar2(40),
name2 varchar2(40)
);
create index payment_idx on payment(name1, name2);
--Insert 100K sample rows.
insert into payment
select level, sysdate + level, lpad('A', 400, 'A'), level, level
from dual
connect by level <= 100000;
--Insert two rows with relevant values.
insert into payment values(0, sysdate, 'other data', 'B JOHN B PUBLIC B', 'asdf');
insert into payment values(0, sysdate, 'other data', 'asdf', 'C JOHN C PUBLIC C');
commit;
--Gather stats to help optimizer pick the right plan.
begin
dbms_stats.gather_table_stats(user, 'payment');
end;
/
Original expression uses a full table scan:
explain plan for
select name1, name2
from payment
where NAME1||NAME2 LIKE '%JOHN%PUBLIC%' OR NAME1||NAME2 LIKE '%PUBLIC%JOHN%';
select * from table(dbms_xplan.display);
Plan hash value: 684176532
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 9750 | 4056K| 1714 (1)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| PAYMENT | 9750 | 4056K| 1714 (1)| 00:00:01 |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("NAME1"||"NAME2" LIKE '%JOHN%PUBLIC%' OR "NAME1"||"NAME2"
LIKE '%PUBLIC%JOHN%')
New expression uses a faster INDEX FAST FULL SCAN operation:
explain plan for
select name1, name2
from payment
where
NAME1 LIKE '%JOHN%PUBLIC%' OR
NAME1 LIKE '%PUBLIC%JOHN%' OR
NAME2 LIKE '%JOHN%PUBLIC%' OR
NAME2 LIKE '%PUBLIC%JOHN%';
select * from table(dbms_xplan.display);
Plan hash value: 1655289165
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 18550 | 217K| 152 (3)| 00:00:01 |
|* 1 | INDEX FAST FULL SCAN| PAYMENT_IDX | 18550 | 217K| 152 (3)| 00:00:01 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("NAME1" LIKE '%JOHN%PUBLIC%' AND "NAME1" IS NOT NULL AND
"NAME1" IS NOT NULL OR "NAME1" LIKE '%PUBLIC%JOHN%' AND "NAME1" IS NOT NULL
AND "NAME1" IS NOT NULL OR "NAME2" LIKE '%JOHN%PUBLIC%' AND "NAME2" IS NOT
NULL AND "NAME2" IS NOT NULL OR "NAME2" LIKE '%PUBLIC%JOHN%' AND "NAME2" IS
NOT NULL AND "NAME2" IS NOT NULL)
This solution should definitely be faster than a full table scan. How much faster depends on the average name size and the name being searched. And depending on the query you may want to add additional columns to keep all the relevant data in the index.
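For example, if the real query also needs the payment date, a wider index (the column list below is only illustrative) would let the whole query be answered from the index without ever visiting the table:
--Hypothetical covering index: with paydate included, a query that selects
--name1, name2, and paydate can be satisfied entirely from the index.
create index payment_covering_idx on payment(name1, name2, paydate);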
Oracle Text is also a good option, but that feature feels a little "weird" in my opinion. If you're not already using text indexes you might want to stick with normal indexes to simplify administrative tasks.
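If you do decide to try Oracle Text, a minimal sketch might look like the one below (the index names are illustrative, and CONTEXT indexes are not maintained transactionally, so you would also need to plan for index synchronization):
--Sketch of an Oracle Text approach (illustrative index names).
--A CONTEXT index tokenizes the words in each name, so word-order
--independent searches no longer need leading wildcards.
create index payment_name1_txt on payment(name1) indextype is ctxsys.context;
create index payment_name2_txt on payment(name2) indextype is ctxsys.context;

select name1, name2
from payment
where contains(name1, 'JOHN AND PUBLIC') > 0
   or contains(name2, 'JOHN AND PUBLIC') > 0;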
I've read through similar questions and they don't seem to quite fit my issue or they're in a different environment.
I'm working in MS-Access 2016.
I have a customer complaints report which has fields: year, month, count([complaint #]), complaint_desc.
(complaint # is the literal ID number we assign to each complaint entered into the table)
I grouped the report by year and month, then grouped by complaint_desc, and for each description did a count of complaint numbers. I also did a count of complaint # to add up the total complaints for the month and put it in the month footer, which gives a result like this:
2020 03 <= (this is the month group header)
complaint desc | count of complaints/desc
---------------------------------------------
electrical | 2 {This section is
cosmetic | 6 {in the Complaint_desc
mechanical | 1 {group footer
---------------------------------------------
9 <= (this is month group footer)
repeating the group for each month
This is all good. What I want to do is to sort the records within the complaint desc group in descending order of count(complaint#) so that it looks like:
2020 03
complaint desc | count of complaints/category
---------------------------------------------
cosmetic | 6
electrical | 2
mechanical | 1
---------------------------------------------
9
However, nothing I do seems to work. The desc group's built-in sort ("A on top") overrides sorting in the query, and adding a sort by complaint# is ignored as well. I tried to add a sort by count(complaint#) and Access told me I can't have an aggregate function in an ORDER BY (but I think it would have been overridden anyway). I also tried to group by count(complaint#), which was likewise shot down as an aggregate in a GROUP BY. I tried moving complaint_desc and count(complaint#) to the complaint# group header, but that screwed up the total count in the month footer and also split up the complaint descriptions, defeating the original purpose...
I really didn't think this change was going to be a big deal, but a solution has evaded me for a while now. I've read similar questions and tried to follow examples but they didn't lead to my intended result.
Any ideas?
I figured it out! Thank you to @UnhandledException, who got me thinking on the right track.
So here's what I did:
The original query the report was based on contained the following:
Design mode:
Field | Year | Month | Complaint_Desc | Complaint# |
Total | Group By | Group By | Group By | Group By |
Sort | | | | |
or in SQL:
SELECT Year, Month, [tbl Failure Mode].[Code description], [Complaint Data Table].[Complaint #]
FROM [tbl Failure Mode] RIGHT JOIN [Complaint Data Table] ON [tbl Failure Mode].[ID code] = [Complaint Data Table].[Failure Mode]
GROUP BY Year, Month, [tbl Failure Mode].[Code description], [Complaint Data Table].[Complaint #];
And then I was using the report's group and sort functions to make it show how I wanted except for the hiccup I mentioned.
I made another query based upon that query:
Design mode:
Field | Year | Month | Complaint_Desc | Complaint# |
Total | Group By | Group By | Group By | Count |
Sort | Descending | Descending | | Descending |
or in SQL:
SELECT [qry FailureMode].Year, [qry FailureMode].Month, [qry FailureMode].[Code description], Count([qry FailureMode].[Complaint #]) AS [CountOfComplaint #], [qry FailureMode].Complaint
FROM [qry FailureMode]
GROUP BY [qry FailureMode].Year, [qry FailureMode].Month, [qry FailureMode].[Code description], [qry FailureMode].Complaint
ORDER BY [qry FailureMode].Year DESC , [qry FailureMode].Month DESC , Count([qry FailureMode].[Complaint #]) DESC;
Then I changed the report structure:
I eliminated the Complaint_Desc group and moved complaint_desc and CountOfComplaint# (which is now not a function but its own calculated field from my new query) to the Detail section of the report. Then I deleted the second count(complaint#) that was in the month footer as a total for each month and replaced it with the "AccessTotalsCountOfComplaint #" control, which is =Sum([CountOfComplaint #]). I had Access auto-create it by right-clicking on CountOfComplaint# in the Detail section, scrolling to "Total", and clicking "Sum". (I deleted the extra AccessTotalsCountOfComplaint# controls that ended up outside of the month group footer, which is the only place I needed one.)
Et Voila
I hope this helps someone else, and thank you again to Unhandled Exception who pointed me in the right direction.
In HiveQL, what is the most elegant and performant way of calculating an average value when there are 'gaps' in the data, with implicit repeated values between them? For example, consider a table with the following data:
+----------+----------+----------+
| Employee | Date | Balance |
+----------+----------+----------+
| John | 20181029 | 1800.2 |
| John | 20181105 | 2937.74 |
| John | 20181106 | 3000 |
| John | 20181110 | 1500 |
| John | 20181119 | -755.5 |
| John | 20181120 | -800 |
| John | 20181121 | 1200 |
| John | 20181122 | -400 |
| John | 20181123 | -900 |
| John | 20181202 | -1300 |
+----------+----------+----------+
If I try to calculate a simple average of the November rows, it returns ~722.78, but the average should take into account that the days that are not shown have the same balance as the previous record. In the above data, John had 1800.2 between 20181101 and 20181104, for example. (Weighting each balance by the number of November days it was in effect would give roughly 922.77 instead.)
Assuming that the table always has exactly one row for each date/balance, and given that I cannot change how this data is stored (and probably shouldn't, since it would be a waste of storage to write rows for days with unchanged balances), I've been tinkering with getting the average from a select with subqueries for all the days in the queried month, returning NULL for the absent days, and then using a CASE to pull the balance from the previous available date in reverse order. All of this just to avoid writing temporary tables.
Step 1: Original Data
The first step is to recreate a table with the original data. Let's say the original table is called daily_employee_balance.
daily_employee_balance
use default;
drop table if exists daily_employee_balance;
create table if not exists daily_employee_balance (
employee_id string,
employee string,
iso_date date,
balance double
);
Insert sample data into the original table daily_employee_balance:
insert into table daily_employee_balance values
('103','John','2018-10-25',1800.2),
('103','John','2018-10-29',1125.7),
('103','John','2018-11-05',2937.74),
('103','John','2018-11-06',3000),
('103','John','2018-11-10',1500),
('103','John','2018-11-19',-755.5),
('103','John','2018-11-20',-800),
('103','John','2018-11-21',1200),
('103','John','2018-11-22',-400),
('103','John','2018-11-23',-900),
('103','John','2018-12-02',-1300);
Step 2: Dimension Table
You will need a dimension table with a calendar (a table containing all the possible dates); call it dimension_date. Having a calendar table is standard industry practice, and you can probably find sample data for one online.
use default;
drop table if exists dimension_date;
create external table dimension_date(
date_id int,
iso_date string,
year string,
month string,
month_desc string,
end_of_month_flg string
);
Insert some sample data for the entire month of November 2018:
insert into table dimension_date values
(6880,'2018-11-01','2018','2018-11','November','N'),
(6881,'2018-11-02','2018','2018-11','November','N'),
(6882,'2018-11-03','2018','2018-11','November','N'),
(6883,'2018-11-04','2018','2018-11','November','N'),
(6884,'2018-11-05','2018','2018-11','November','N'),
(6885,'2018-11-06','2018','2018-11','November','N'),
(6886,'2018-11-07','2018','2018-11','November','N'),
(6887,'2018-11-08','2018','2018-11','November','N'),
(6888,'2018-11-09','2018','2018-11','November','N'),
(6889,'2018-11-10','2018','2018-11','November','N'),
(6890,'2018-11-11','2018','2018-11','November','N'),
(6891,'2018-11-12','2018','2018-11','November','N'),
(6892,'2018-11-13','2018','2018-11','November','N'),
(6893,'2018-11-14','2018','2018-11','November','N'),
(6894,'2018-11-15','2018','2018-11','November','N'),
(6895,'2018-11-16','2018','2018-11','November','N'),
(6896,'2018-11-17','2018','2018-11','November','N'),
(6897,'2018-11-18','2018','2018-11','November','N'),
(6898,'2018-11-19','2018','2018-11','November','N'),
(6899,'2018-11-20','2018','2018-11','November','N'),
(6900,'2018-11-21','2018','2018-11','November','N'),
(6901,'2018-11-22','2018','2018-11','November','N'),
(6902,'2018-11-23','2018','2018-11','November','N'),
(6903,'2018-11-24','2018','2018-11','November','N'),
(6904,'2018-11-25','2018','2018-11','November','N'),
(6905,'2018-11-26','2018','2018-11','November','N'),
(6906,'2018-11-27','2018','2018-11','November','N'),
(6907,'2018-11-28','2018','2018-11','November','N'),
(6908,'2018-11-29','2018','2018-11','November','N'),
(6909,'2018-11-30','2018','2018-11','November','Y');
Step 3: Fact Table
Create a fact table from the original table. In normal practice, you ingest the data into HDFS/Hive, then process the raw data and create a table of historical data that you keep appending to incrementally. You can look into data warehousing for the proper definitions, but I'll call this a fact table: f_employee_balance.
This will re-create the original table with the missing dates filled in, populating each missing balance with the most recent earlier known balance.
--inner query to get all the possible dates
--outer self join query will populate the missing dates and balance
drop table if exists f_employee_balance;
create table f_employee_balance
stored as orc tblproperties ("orc.compress"="SNAPPY") as
select q1.employee_id, q1.iso_date,
nvl(last_value(r.balance, true) --initial dates to be populated with 0 balance
over (partition by q1.employee_id order by q1.iso_date rows between unbounded preceding and current row),0) as balance,
month, year from (
select distinct
r.employee_id,
d.iso_date as iso_date,
d.month, d.year
from daily_employee_balance r, dimension_date d )q1
left outer join daily_employee_balance r on
(q1.employee_id = r.employee_id) and (q1.iso_date = r.iso_date);
Step 4: Analytics
The query below will give you the true average by month:
select employee_id, monthly_avg, month, year from (
select employee_id,
row_number() over (partition by employee_id,year,month) as row_num,
avg(balance) over (partition by employee_id,year,month) as monthly_avg, month, year from
f_employee_balance)q1
where row_num = 1
order by year, month;
Step 5: Conclusion
You could have just combined steps 3 and 4; that would save you from creating an extra table. But in the big data world you don't worry as much about extra disk space or development time: you can easily add another disk or node and automate the process using workflows. For more information, look into data warehousing concepts and Hive analytical queries.
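If you do want to skip the intermediate table, a rough sketch of steps 3 and 4 combined into a single query (the same logic as above, just using a plain GROUP BY for the average instead of the row_number trick) could look like this:
--Sketch: steps 3 and 4 combined, no intermediate fact table.
select employee_id, year, month, avg(balance) as monthly_avg
from (
  select q1.employee_id, q1.iso_date, q1.month, q1.year,
         nvl(last_value(r.balance, true)
             over (partition by q1.employee_id order by q1.iso_date
                   rows between unbounded preceding and current row), 0) as balance
  from (
    select distinct r.employee_id, d.iso_date as iso_date, d.month, d.year
    from daily_employee_balance r, dimension_date d
  ) q1
  left outer join daily_employee_balance r
    on (q1.employee_id = r.employee_id) and (q1.iso_date = r.iso_date)
) filled
group by employee_id, year, month
order by year, month;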