What are the deciding factors for the order of Tables when joining amongst them? - performance

I know that when joining across multiple tables, performance is dependent upon the order in which they are joined. What factors should I consider when joining tables?

Most modern RDBMSs optimize the query based upon which tables are joined, the indexes available, table statistics, etc. The final execution plan rarely, if ever, differs based upon the order of the joins in the query.
SQL is designed to be declarative; you specify what you want, not (in most cases) how to get it. While there are things like index hints that can allow you to direct the optimizer to use or avoid specific indexes, by and large you can leave that work to the engine and be about the business of writing your queries.
In the end, running different versions of your queries within SQL Server Management Studio and viewing the actual execution plans is the only way to tell if order can truly make a difference.
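For instance, a quick check (the table and column names here are invented for illustration) is to run the same join in two different orders in SSMS with "Include Actual Execution Plan" enabled and compare the plans:
-- Two orderings of the same logical join; the optimizer will normally produce the same plan for both.
SELECT o.OrderID, c.CustomerName, p.ProductName
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID
JOIN Products p ON p.ProductID = o.ProductID;

SELECT o.OrderID, c.CustomerName, p.ProductName
FROM Products p
JOIN Orders o ON o.ProductID = p.ProductID
JOIN Customers c ON c.CustomerID = o.CustomerID;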

As far as I know, the join order has no effect on query performance. The query engine will parse the query and execute it in the way it believes is the most efficient. If you want, try writing the query using different join orders and look at the execution plan. They should be the same.
See this article: http://sql-4-life.blogspot.com/2009/03/order-of-inner-joins.html

Related

Hive union all efficiency and best practice

I have a Hive efficiency question. I have 2 massive queries that need to be filtered, joined with mapping tables, and unioned. All the joins are identical for both tables. Would it be more efficient to union them before applying the joins to the combined table, or to apply the joins to each massive query individually and then union the results? Does it make a difference?
I tried the second way and the query ran for 24 hours before I killed it. I feel like I did everything I could to optimize it except potentially rearrange the union statement. On the one hand, I feel like it should not matter because the number of rows being joined against the mapping table is the same, and since everything is parallelized, it should take roughly the same amount of time. On the other hand, maybe doing the union first guarantees that the two big queries get full system resources before the joins are run. Then again, that might mean there are only 2 jobs running at a time, so the system is not being fully used or something.
I simply do not know enough about how Hive and its multi-threading work. Anybody have any ideas?
There is no single best practice here; both approaches are applicable. Subqueries in a UNION ALL run as parallel jobs, so joining before the union works as parallel tasks on smaller datasets, Tez can optimize the execution, and each common joined table is read only once in a single mapper stage.
You can also skip the join entirely for some subqueries, for example if their keys are not applicable to the join.
Joining the union-ed, bigger dataset may also run with very high parallelism depending on your settings (bytes per reducer, for example), and the optimizer may rewrite the query plan. So I suggest you try both methods, measure the speed, study the plan, and see whether you can change something. Change, measure, study the plan... repeat.
A few more suggestions:
Try to limit the datasets before joining them. If your join multiplies rows, then analytics and aggregation will run slower on the bigger datasets, so the first approach may be preferable if you can apply the analytics/aggregation before the union.
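To make the two shapes concrete, here is a rough HiveQL sketch (big1, big2, map_table and the column names are placeholders, not your actual tables; older Hive versions may require the UNION ALL to be wrapped in a subquery):
-- Approach 1: join each big query to the mapping table, then union the results.
SELECT b.id, b.metric, m.label
FROM big1 b JOIN map_table m ON b.map_key = m.map_key
UNION ALL
SELECT b.id, b.metric, m.label
FROM big2 b JOIN map_table m ON b.map_key = m.map_key;

-- Approach 2: union the big queries first, then apply the join once.
SELECT u.id, u.metric, m.label
FROM (
  SELECT id, metric, map_key FROM big1
  UNION ALL
  SELECT id, metric, map_key FROM big2
) u
JOIN map_table m ON u.map_key = m.map_key;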

Hive vs Pig when performing Joins

I have some scripts which process my website's logs. I have loaded this data into multiple tables in Hive. I run these scripts on a daily basis to analyze the traffic.
Lately I am seeing that the Hive queries in these scripts are taking too much time. Earlier, it used to take around 10-15 mins to generate the reports, but now it takes hours to do the same.
I analyzed the data and it has grown by only around 5-10%.
One of my friends suggested that Hive is not good when it comes to joining multiple Hive tables and that I should switch my scripts to Pig. Is Hive bad at joining tables when compared to Pig?
Is Hive bad at joining tables
No. Hive is actually pretty good, but sometimes it takes a bit playing around with the query optimizer.
Depending on which version of Hive you use, you may need to provide hints in your query to tell the optimizer to join the data using a certain algorithm. You can find some details about different hints here.
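For example, when one side of the join is small enough to fit in memory, a map-join hint can help (the table names below are made up; on recent Hive versions hive.auto.convert.join often does this automatically):
-- Ask Hive to broadcast the small dimension table to the mappers instead of shuffling both sides.
SELECT /*+ MAPJOIN(d) */ f.log_id, f.url, d.category
FROM fact_logs f
JOIN page_dim d ON f.page_id = d.page_id;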
If you're thinking about using Pig, I think your choice should not be motivated only by performance considerations. In my experience there is no quantifiable gain in using Pig; I have used both over the past years, and in terms of performance there is no clear winner.
What Pig gives you however is more transparency when defining what kind of join you want to use instead of relying on some (sometimes obscure) optimizer hints.
In the end, Pig or Hive doesn't really matter; it just depends on how you decide to optimize your queries. If you're considering switching to Pig, I would first really analyze what your processing needs are, as you'll probably only break even in terms of performance. Here is a good post if you want to compare the two.

Large query, multiple tables, old vs new JOIN syntax

I have a large query that joins around 20 tables (mostly outer joins). It is using the older join syntax with commas and where conditions with (+) for outer joins.
We noticed that it is consuming a lot of server memory. We are trying several things among which one idea is to convert this query to use the newer ANSI syntax, since the ANSI syntax allows better control on the order of JOINs and also specifies the JOIN predicates explicitly as they are applied.
Does converting the query from an older syntax to the newer ANSI syntax help in reducing the amount of data processed, for such large queries spanning a good number of tables?
In my experience, it does not - it generates identical execution plans. That said, the newer JOIN syntax does allow you to do things that you can't do with the old syntax. I would recommend converting it for that reason, and for clarity. The ANSI syntax is just so much easier to read (at least for me). Once converted you can then compare execution plans.
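As an illustration of the conversion, using the classic emp/dept demo tables (your actual tables will differ), the old comma/(+) style and its ANSI equivalent look like this:
-- Old Oracle syntax: comma joins, with (+) marking the optional side.
SELECT e.ename, d.dname
FROM emp e, dept d
WHERE e.deptno = d.deptno (+);

-- ANSI equivalent: the join predicate is stated where the join happens.
SELECT e.ename, d.dname
FROM emp e
LEFT OUTER JOIN dept d ON e.deptno = d.deptno;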
DCookie said all there is to say about ANSI syntax.
However, if you outer join 20 tables, it is no wonder you consume a lot of server memory. Cutting the query into smaller subqueries might improve performance: as written, all the tables have to be read into memory, joined in memory, filtered, and only then are the columns you need selected.
Reversing that order (filtering and projecting as early as possible) will at least save memory, although it won't necessarily improve execution speed.
As DCookie mentioned, both versions should produce identical execution plans. I would start by looking at the current query's execution plan and figuring out what is actually taking up the memory. A quick look at DBMS_XPLAN.DISPLAY_CURSOR output should be a good start. Once you know exactly what part of the query you are trying to improve, then you can analyze if switching to ANSI style joins will do anything to help you reach your end goal.
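A minimal way to pull the plan for a statement (the sql_id is a placeholder to be looked up in V$SQL; the statistics output requires the statement to have been run with statistics gathering enabled) might look like:
-- Plan of the last statement executed in this session:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);

-- Plan plus runtime statistics for a specific cursor:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('<sql_id>', NULL, 'ALLSTATS LAST'));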

Schemas and Indexes & Primary Keys : Differences in lookup performance?

I have a database (running on Postgres, to be precise) with the following structure:
user1 (schema)
|
- cars (table)
- airplanes (table, again)
...
user2
|
- cars
- airplanes
...
It's clearly not structured the way a classic relational database should be, but it "just works" as it is now. As you can see, the schemas are used like primary keys to identify entries.
In terms of performance, and nothing else, is it worth rebuilding it so that it has traditional primary keys (of type varchar) and clustered indexes instead of schemas?
From a performance perspective - actually from any perspective - surely this is a NIGHTMARE. REBUILD!
Without knowing any more about your situation, I guess the answer would be YES, this would affect performance. Ordinarily simple queries would not only be much more complicated to write and maintain, but the db would also produce query plans that are significantly more costly to execute.
Edit: I've worked with, and designed, DB's to handle a lot of data in high workload environments (banking and medical) and I have never seen anything like it; well not in the modern world!
So it looks like each user just has their own schema? Often large, large data sets are split up close to this (more often by customer in a lot of business scenarios). It's often a premature optimization because it introduces additional complexity to your application and a single table with a user column would scale to a reasonable number of rows.
However, whether or not you'll gain any performance from combining into a single schema really depends on whether you run many cross-user queries (in other words, queries that have to cross schemas/tables) and whether the data in each set of tables is exclusive to that user. If you're replicating data from one user's tables into another's, then you need to at least redesign those tables into a common schema.
I personally try to avoid a per-schema approach under normal circumstances (due to additional maintenance overhead and app complexity), but it has its place. And I'd hardly call this a "nightmare" unless I'm not understanding something correctly.
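If you did consolidate, the usual shape (sketched here in Postgres with invented column names) is a single set of shared tables keyed by a user id, instead of one schema per user:
-- One shared set of tables; the per-user split becomes a column plus an index.
CREATE TABLE users (
    user_id   serial PRIMARY KEY,
    username  varchar(100) NOT NULL UNIQUE
);

CREATE TABLE cars (
    car_id    serial PRIMARY KEY,
    user_id   integer NOT NULL REFERENCES users(user_id),
    model     varchar(100)
);

CREATE INDEX cars_user_id_idx ON cars (user_id);

-- A per-user lookup then stays a simple indexed query:
SELECT model FROM cars WHERE user_id = 42;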

Oracle Hierarchical Query Performance

We're looking at using Oracle hierarchical queries to model potentially very large tree structures (potentially infinitely wide, and 30+ levels deep). My understanding is that hierarchical queries provide a method to write recursively joining SQL, but that they do not provide any real performance enhancement over manually writing an equivalent query... is this the case? What sort of experiences have people had, performance wise, with using Oracle hierarchical queries?
Well, the short answer is that without the hierarchical extension (CONNECT BY) you couldn't write a recursive query at all; you could only programmatically issue many queries which were recursively linked.
The rule of thumb with everything database-related, especially Oracle, is that if you can produce your result in a single query it will almost always be faster than doing it programmatically.
My experiences have been with much smaller sets, so I can't speak for how well hierarchical queries will perform for large sets.
When doing these tree retrievals, you typically have these options
Query everything and assemble the tree on the client side.
Perform one query for each level of the tree, building on what you know that you need from the previous query results
Use the built-in support Oracle provides (START WITH, CONNECT BY PRIOR).
Doing it all in the database will reduce unnecessary round trips or wasteful queries that pull too much data.
Try partitioning the data within your hierarchical table and then limiting the partitions included in the query.
CREATE TABLE loopy
(key NUMBER, key_hier NUMBER, info VARCHAR2(100), part NUMBER)
PARTITION BY RANGE (part)
(
PARTITION low VALUES LESS THAN (1000),
PARTITION mid VALUES LESS THAN (10000),
PARTITION high VALUES LESS THAN (MAXVALUE)
);

-- key_hier holds the parent row's key, so PRIOR links each child to its parent.
SELECT info
FROM loopy PARTITION (mid)
START WITH key = <some value>
CONNECT BY PRIOR key = key_hier;
The interesting problem now becomes your partitioning strategy. Oracle provides several options.
I've seen that using connect by can be slow but compared to what? There isn't really another option except building a result set using recursive PL/SQL calls (slower) or doing it on your client side.
You could try separating your data into a mapping table (the hierarchy definition) and a lookup table (the display data) and then joining them back together, as sketched below. I guess I wouldn't expect much of a gain assuming you are getting the hierarchy data from indexed fields, but it's worth a try.
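A rough sketch of that split (the table and column names are invented): run CONNECT BY over a narrow mapping table that holds only the keys, then join the display data back on:
-- node_map(key, key_hier) holds only the hierarchy; node_info(key, info) holds the display data.
SELECT h.key, n.info
FROM (
    SELECT key
    FROM node_map
    START WITH key = <some value>
    CONNECT BY PRIOR key = key_hier
) h
JOIN node_info n ON n.key = h.key;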
Have you tried it using the connect by yet? I'm a big fan of trying different variations.
