Using query hints to use an index in an inner table - Oracle

I have a query which uses the view a as follows, and it is extremely slow:
select *
from a
where a.id = 1 and a.name = 'Ann';
The view a is made up of four other views b, c, d, e:
select b.id, c.name, c.age, e.town
from b,c,d,e
where c.name = b.name AND c.id = d.id AND d.name = e.name;
I have created an index named c_test on the underlying table of view c, and I need it to be used when executing the first query.
Is this possible?

Are you really using this deprecated 1980s join syntax? You shouldn't. Use proper explicit joins (INNER JOIN in your case).
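For reference, the view's query rewritten with explicit joins (same tables, columns, and join conditions as in the question; just a sketch of the syntax change):
select b.id, c.name, c.age, e.town
from b
inner join c on c.name = b.name
inner join d on d.id = c.id
inner join e on e.name = d.name;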
You are joining the two tables C and D on their IDs. That should mean they are 1:1 related. If not, "ID" is a misnomer, because an ID is supposed to identify a row.
Now let's look at the access path: you filter on the ID from table B and on the name from tables B and C. We can tell from the column name that b.id is unique, and Oracle guarantees this with a unique index if the database is set up properly.
This means the DBMS will look for the B row with ID 1, find it instantly in the index, fetch the row instantly from the table, and check whether the name matches 'Ann'.
Hence the only thing that can be slow is joining C, D, and E. Joining on unique IDs is extremely fast. Joining on (non-unique?) names is only fast if you provide indexes on the names. I'd recommend the following indexes accordingly:
create index idx_c on c (name);
create index idx_e on e (name);
To get this faster still, use covering indexes instead:
create index idx_b on b (id, name);
create index idx_c on c (name, id, age);
create index idx_d on d (id, name);
create index idx_e on e (name, town);
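To check whether Oracle actually picks these indexes for the original query, you can inspect the execution plan (standard Oracle tooling; the exact plan output depends on your version and statistics):
explain plan for
select * from a where a.id = 1 and a.name = 'Ann';

select * from table(dbms_xplan.display);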

Related

Consecutive JOIN and aliases: order of execution

I am trying to use FULLTEXT search as a preliminary filter before fetching data from another table. Consecutive JOINs follow to further refine the query and to mix-and-match rows (in reality there are up to 6 JOINs of the main table).
The first "filter" returns the IDs of the rows that are useful, so after joining I have a subset to continue with. My issue is performance, however, and my lack of understanding of how the SQL query is executed in SQLite.
SELECT *
FROM mytbl AS t1
JOIN
(SELECT someid
FROM myftstbl
WHERE
myftstbl MATCH 'MATCHME') AS prior
ON
t1.someid = prior.someid
AND t1.othercol = 'somevalue'
JOIN mytbl AS t2
ON
t2.someid = prior.someid
/* Or is this faster? t2.someid = t1.someid */
My thought process for the query above is that first, we retrieve the matched IDs from the myftstbl table and use those to JOIN on the main table t1 to get a sub-selection. Then we again JOIN a duplicate of the main table as t2. The part that I am unsure of is which approach would be faster for the second join: using the IDs from the matches (prior), or from t1?
In other words: when I refer to t1.someid inside the second JOIN, does it contain only the someids left after the first JOIN (so only those at the intersection of prior and those for which t1.othercol = 'somevalue'), or does it contain all the original someids of the whole original table?
You can assume that all columns are indexed. In fact, when I use one or the other approach, I find with EXPLAIN QUERY PLAN that different indices are being used for each query. So there must be a difference between the two.
The query should be simplified to
SELECT *
FROM mytbl AS t1
JOIN myftstbl USING (someid) -- or ON t1.someid = myftstbl.someid
JOIN mytbl AS t2 USING (someid) -- or ON t1.someid = t2.someid
WHERE myftstbl.{???} MATCH 'MATCHME' -- replace {???} with correct column name
AND t1.othercol = 'somevalue'
PS: The query logic is not clear to me, so it is kept as-is.
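If you want to check which variant SQLite prefers on your data, you can prefix either form with EXPLAIN QUERY PLAN and compare the output (table and column names as in the question; matching against the table name searches all FTS columns, as in the original query):
EXPLAIN QUERY PLAN
SELECT *
FROM mytbl AS t1
JOIN myftstbl USING (someid)
JOIN mytbl AS t2 USING (someid)
WHERE myftstbl MATCH 'MATCHME'
AND t1.othercol = 'somevalue';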

Oracle: how to use DISTINCT with a join

I am trying to get a list of distinct values from one column whilst joining to another table on the key column, as below. Table a and table b both hold the key column client.
Table b holds a product column which has a range of values against a range of client numbers.
Table a holds only the client number.
Table b holds the client number and product:
Client | Product
1      | A
1      | B
2      | B
3      | C
I want to find the list of distinct product values where the client is in table a and table b
Any suggestions welcome
"find the list of distinct product values where the client is in table a and table b"
As you will notice below, DISTINCT isn't applied to the join itself; it is applied to the selected column:
SELECT DISTINCT
b.Product
FROM TABLEA a
INNER JOIN TABLEB b ON a.Client = b.Client
;
The inner join ensures that client exists in both A and B and then the "select distinct" removes any repetition in the list of products.
An alternative, which also produces a distinct list of products where the client is in both A and B, is to use GROUP BY, with the added bonus that this way you can do some extra stuff like counting how often each product is referenced:
SELECT
b.Product
, COUNT(*) AS countof
FROM TABLEA a
INNER JOIN TABLEB b ON a.Client = b.Client
GROUP BY
b.Product
;
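As a side note, a semi-join with EXISTS is another common way to express "the client is in both tables"; you still need DISTINCT because the same product can appear for several clients:
SELECT DISTINCT
b.Product
FROM TABLEB b
WHERE EXISTS (SELECT 1 FROM TABLEA a WHERE a.Client = b.Client)
;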

How to efficiently select data from two tables?

I have two tables: A, B.
A has prisoner_id and prisoner_name columns.
B has all other info about prisoners included prisoner_name column.
First I select all of the data that I need from B:
WITH prisoner_datas AS
(SELECT prisoner_name, ... FROM B WHERE ...)
Then I want to find the id of each row in prisoner_datas. To do this I need to join on the prisoner_name column, because it is common to both tables.
I did the following:
SELECT A.prisoner_id, prisoner_datas.prisoner_name, prisoner_datas. ...,
FROM A, prisoner_datas
WHERE A.prisoner_name = prisoner_datas.prisoner_name
But it works very slow. How can I improve performance?
Add an index on the prisoner_name join column in the B table. Then the following join should have some performance improvement:
SELECT
A.prisoner_id,
B.prisoner_name -- plus any other columns you need from B
FROM A
INNER JOIN B
ON A.prisoner_name = B.prisoner_name
Note that I used explicit join syntax here. It isn't required, and the query plan might not change, but it makes the query easier to read. I don't think the CTE changes much; the missing index on the join column is the important thing here.
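A minimal sketch of the suggested index (the index name is just illustrative; an index on A (prisoner_name) can also help, depending on which table drives the join):
CREATE INDEX idx_b_prisoner_name ON B (prisoner_name);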

Worse query plan with a JOIN after ANALYZE

I see that running ANALYZE results in significantly worse performance on a particular JOIN I'm making between two tables.
Suppose the following schema:
CREATE TABLE a ( id INTEGER PRIMARY KEY, name TEXT );
CREATE TABLE b ( a NOT NULL REFERENCES a, text TEXT, value INTEGER, PRIMARY KEY(a, text) );
CREATE VIEW ab AS SELECT a.name, b.text, MAX(b.value)
FROM a
JOIN b ON b.a = a.id
GROUP BY a.id
ORDER BY a.name;
Table a is approximately 10K rows, table b is approximately 48K rows (~5 rows per row in table a).
Before ANALYZE
Now when I run the following query:
SELECT * FROM ab;
The query plan looks as follows:
1|0|0|SCAN TABLE b
1|1|1|SEARCH TABLE a USING INTEGER PRIMARY KEY (rowid=?)
This is a good plan: b is larger and I want it to be in the outer loop, making use of the primary key index on table a. It finishes well within a second.
After ANALYZE
When I execute the same query again, the query plan results in two table scans:
1|0|1|SCAN TABLE a
1|1|0|SCAN TABLE b
This is far from optimal. For some reason the query planner thinks that an outer loop of 10K rows and an inner loop of 48K rows is a better fit. This takes about 1.5 minutes to complete.
Should I adapt the index in table b to make it work after ANALYZE? Anything else to change to the indexing/schema?
I am just trying to understand the problem here. I worked around it using a CROSS JOIN, but that feels dirty, and I don't really understand why the planner would go with a plan that is orders of magnitude slower than the un-analyzed one. It seems to be related to the GROUP BY, since the query planner puts table b in the outer loop without it (but that renders the query useless for what I want).
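The workaround would look roughly like this (in SQLite, CROSS JOIN is an ordinary inner join that additionally tells the planner not to reorder the tables, so listing b first keeps it in the outer loop):
CREATE VIEW ab AS SELECT a.name, b.text, MAX(b.value)
FROM b CROSS JOIN a ON b.a = a.id
GROUP BY a.id
ORDER BY a.name;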
I accidentally found the answer by adjusting the GROUP BY clause in the view definition. Instead of grouping on a.id, I now group on b.a, although they hold the same values.
CREATE VIEW ab AS SELECT a.name, b.text, MAX(b.value)
FROM a
JOIN b ON b.a = a.id
GROUP BY b.a -- <== changed this from a.id to b.a
ORDER BY a.name;
I'm still not entirely sure what the difference is, since it groups the same data.

How do I write a LINQ query to combine multiple rows into one row?

I have one table, 'a', with id and timestamp. Another table, 'b', has N rows referring to each id, and each of those rows has a 'type' and "some other data".
I want a LINQ query to produce a single row with id, timestamp, and "some other data" x N. Like this:
1 | 4671 | 46.5 | 56.5
where 46.5 is from one row of 'b', and 56.5 is from another row; both with the same id.
I have a working query in SQLite, but I am new to LINQ. I don't know where to start - I don't think this is a JOIN at all.
SELECT
a.id as id,
a.seconds,
COALESCE(
(SELECT b.some_data FROM
b WHERE
b.id=a.id AND b.type=1), '') AS 'data_one',
COALESCE(
(SELECT b.some_data FROM
b WHERE
b.id=a.id AND b.type=2), '') AS 'data_two'
FROM a
WHERE a.id=1
GROUP BY a.id
You didn't mention whether you are using LINQ to SQL or LINQ to Entities. However, the following query should get you there:
(from x in a
join y in b on x.id equals y.id
select new{x.id, x.seconds, y.some_data, y.type}).GroupBy(x=>new{x.id,x.seconds}).
Select(x=>new{
id = x.Key.id,
seconds = x.Key.seconds,
data_one = x.Where(z=>z.type == 1).Select(g=>g.some_data).FirstOrDefault(),
data_two = x.Where(z=>z.type == 2).Select(g=>g.some_data).FirstOrDefault()
});
Obviously, you have to prefix your table names with your DataContext or ObjectContext, depending on the underlying provider.
What you want to do is similar to pivoting, see Is it possible to Pivot data using LINQ?. The difference here is that you don't really need to aggregate (like a standard pivot), so you'll need to use Max or some similar method that can simulate selecting a single varchar field.
