How to perform subquery in FROM clause with Yii 1.1?

I have a fairly large and complex query being generated by Yii. To generate this query we're using CDbCriteria::with to eagerly load multiple related models, and we're using multiple scopes to limit the records returned. The generated query is roughly 700 lines long, but looks something like this:
SELECT `t`.`column1` as `t0_c0`,
`t`.`column2` as `t0_c1`,
`related1`.`column1` as `t1_c0`,
...
`related9`.`column5` as `t9_c4`
FROM `model` `t`
LEFT OUTER JOIN `other_model` `related1`
ON ( `t`.`other_model_id` = `related1`.`id` )
...
LEFT OUTER JOIN `more_models` `related9`
ON ( `t`.`more_models_id` = `related9`.`id` )
WHERE
...big long WHERE clause using all of related1 - related9 to filter model...
LIMIT 10
Our database has a not-insignificant amount of data, but nothing obscene either. In this case the model table has about 126000 rows, every "related" model is a BELONGS_TO relationship, and there is an index on t.XXX_id, so the joins are fairly trivial. The problem is the complexity of the WHERE clause, which is full of COALESCE and IF and CASE expressions. Performing the filter on our 126000 rows takes 2.6 seconds -- far longer than we would like for an API endpoint.
The WHERE clause is divided into multiple different sections like so:
WHERE
( ... part 1 ... )
AND
( ... part 2 ... )
AND
( ... part 3 ... )
Each part corresponds to one of the scopes, and each part uses one or more related models.
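For context, a minimal sketch of how such named scopes might be declared on the model (the scope names match the parts above, but the relation names and conditions are illustrative, not our real code):
public function scopes()
{
    return array(
        // Each scope contributes one "part" of the WHERE clause; a scope can
        // also pull in a relation via 'with' so its condition can reference it.
        'part1' => array(
            'with' => 'related1',
            'condition' => 'COALESCE(related1.some_flag, 0) = 1', // illustrative
        ),
        'part2' => array(
            'with' => array('related2', 'related3'),
            'condition' => 'related2.status = 1 AND related3.deleted = 0', // illustrative
        ),
    );
}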
One of the scopes filters on only a single related model, and in doing so filters our table down from 126000 rows to about 2000 rows. I found experimentally (in MySQL Workbench) that I could get our query from 2.6 seconds to 0.2 seconds by simply doing this:
SELECT `t`.`column1` as `t0_c0`,
`t`.`column2` as `t0_c1`,
`related1`.`column1` as `t1_c0`,
...
`related9`.`column5` as `t9_c4`
FROM
(
SELECT `model`.*
FROM `model`
LEFT OUTER JOIN `other_model`
ON ( `model`.`other_model_id` = `other_model`.`id` )
WHERE
( ... part 1 ... )
) `t`
LEFT OUTER JOIN `other_model` `related1`
ON ( `t`.`other_model_id` = `related1`.`id` )
...
LEFT OUTER JOIN `more_models` `related9`
ON ( `t`.`more_models_id` = `related9`.`id` )
WHERE
( ... part 2 ... )
AND
( ... part 3 ... )
LIMIT 10
This way, instead of running the very complex WHERE clause against all 126000 rows of the original model table, we run the much simpler (and index-friendly) WHERE clause against those 126000 rows and then run the complex WHERE clause against only the ~2000 relevant rows. The results of the two queries are identical, but using a subquery in the FROM clause makes the query run 13x faster.
The problem is, I have no idea how to do this in Yii. I know that I can use CDbCommand to build a query and even pass in raw SQL, but what I'll get back is an array of "rows" -- they won't be understood by Yii and properly converted to the right models.
Does Yii's ActiveRecord system have a way to say something like the following?
$criteria = new CDbCriteria;
$criteria->scopes = array("part1");
$subQuery = Model::model()->buildQuery($criteria);
$criteria = new CDbCriteria;
$criteria->scopes = array("part2", "part3");
$fullQuery = $subQuery->findAll($criteria);

Although not a perfect solution, I did find something that's almost as good. Break the original query into two:
Get the IDs of the models you would have selected in the FROM subquery
Append a WHERE clause to the outer query with id IN (...)
This question has gotten very little attention, and once I found a pseudo-decent solution I moved on; for anyone who is interested, a sketch of the approach follows.
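A minimal sketch of that two-step approach, assuming the scopes are named part1/part2/part3 as above and the table has an integer primary key id (names are placeholders):
// Step 1: apply only the cheap, highly selective scope and fetch just the IDs.
$criteria = new CDbCriteria;
$criteria->select = 't.id';
$criteria->scopes = array('part1');
$ids = array();
foreach (Model::model()->findAll($criteria) as $row) {
    $ids[] = $row->id;
}
// Step 2: apply the remaining scopes only to the pre-filtered ID set;
// addInCondition() generates the "id IN (...)" clause with bound parameters.
$criteria = new CDbCriteria;
$criteria->scopes = array('part2', 'part3');
$criteria->addInCondition('t.id', $ids);
$models = Model::model()->findAll($criteria);
This keeps everything as ActiveRecord models, and the expensive WHERE parts only ever see the ~2000 surviving rows, which is what made the hand-written subquery fast.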

Related

Multiple consecutive join operations on PySpark

I am running a PySpark application where we are comparing two large datasets of 3GB each. There are some differences in the datasets, which we are filtering via outer join.
mismatch_ids_row = (sourceonedf.join(sourcetwodf, on=primary_key, how='outer').where(condition).select(primary_key))
mismatch_ids_row.count()
So the output of the join, on count, is a small dataset of, say, 10 records. The shuffle partition count at this point is about 30, which was derived as amount of data / partition size (100 MB).
After that join, each of the two original datasets is joined with the resulting joined dataset to filter out the relevant rows for each dataframe.
df_1 = sourceonedf.join(mismatch_ids_row, on=primary_key, how='inner').dropDuplicates()
df_2 = sourcetwodf.join(mismatch_ids_row, on=primary_key, how='inner').dropDuplicates()
Here we drop duplicates, since the result of the first join can be doubled via the outer join where some values are null.
These two dataframes are then joined to do the column-level comparison and pinpoint exactly where the data is mismatched.
df = (df_1.join(df_2, on=some condition, how="full_outer"))
result_df = df
result_df.count()
The resultant dataset is then used to display as:
result_df.show()
The issue is that the first join, the one with more data, uses a sort-merge join with a partition count of 30, which is fine since that dataset is somewhat large.
After the first join is done, the mismatched rows number only about 10, yet joining them back against 3 GB is a costly operation, and using broadcast didn't help.
The major issue, in my opinion, comes when joining the two small resultant datasets in the second join to produce the result. Here, too many shuffle partitions are killing the performance.
The application is running in client mode as a Spark run for testing purposes, and the parameters are sufficient for it to run on the driver node.
Here is the DAG for the last operation: [DAG screenshot omitted]
As an example:
data1 = [(335008138387,83165192,"yellow","2017-03-03",225,46),
(335008138384,83165189,"yellow","2017-03-03",220,4),
(335008138385,83165193,"yellow","2017-03-03",210,11),
(335008138386,83165194,"yellow","2017-03-03",230,12),
(335008138387,83165195,"yellow","2017-03-03",240,13),
(335008138388,83165196,"yellow","2017-03-03",250,14)
]
data2 = [(335008138387,83165192,"yellow","2017-03-03",300,46),
(335008138384,83165189,"yellow","2017-03-03",220,10),
(335008138385,83165193,"yellow","2017-03-03",210,11),
(335008138386,83165194,"yellow","2017-03-03",230,12),
(335008138387,83165195,"yellow","2017-03-03",240,13),
(335008138388,83165196,"yellow","2017-03-03",250,14)
]
from pyspark.sql.types import StructType, StructField, LongType, IntegerType, StringType
field = [
StructField("row_num",LongType(),True),
StructField("tripid",IntegerType(),True),
StructField("car_type",StringType(),True),
StructField("dates", StringType(), True),
StructField("pickup_location_id", IntegerType(), True),
StructField("trips", IntegerType(), True)
]
schema = StructType(field)
sourceonedf = spark.createDataFrame(data=data1,schema=schema)
sourcetwodf = spark.createDataFrame(data=data2,schema=schema)
They have just two differences; on a larger dataset, think of these as 10 or more differences.
df_1 will get rows from sourceonedf based on mismatch_ids_row, and so will df_2 from sourcetwodf. They are then joined to create another resultant dataframe which outputs the data.
How can we optimize this piece of code so that the partitioning is optimal and it performs faster than it does now?
At this point it takes ~500 seconds for the whole activity, when it could take about 200 seconds less. And why does the show() take time as well? There are only 10 records, so it should print pretty fast if they are all in 1 partition, I guess.
Any suggestions are appreciated.
You should be able to go without df_1 and df_2. After the first 'outer' join you have all the data in that table already.
Cache the result of the first join (as you said, the dataframe is small):
# (Removed the select after the first join)
mismatch_ids_row = sourceonedf.join(sourcetwodf, on=primary_key, how='outer').where(condition)
mismatch_ids_row.cache()
mismatch_ids_row.count()
Then you should be able to create a self-join condition. When joining, use dataframe aliases for explicit control:
result_df = (
    mismatch_ids_row.alias('a')
    .join(mismatch_ids_row.alias('b'), on=some condition...)
    .select(...)
)
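On the shuffle-partition point itself, a tweak worth trying is lowering the partition count for these small joins, or letting Spark coalesce the partitions for you (a sketch, assuming Spark 3.x where Adaptive Query Execution is available):
# Lower the default 200 shuffle partitions for these small joins, or let
# Adaptive Query Execution coalesce post-shuffle partitions automatically.
spark.conf.set("spark.sql.shuffle.partitions", "30")
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")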

MDX: add dimension in filter but also show in rows

I have a simple MDX query that filters data to one of two Status values:
SELECT
NON EMPTY
{ [confidentialstring].Members }
ON COLUMNS,
NON EMPTY
{ [APMWORDER].[LEVEL01].Members *
[APMSTDOPR].[LEVEL01].Members *
[AWOUSTAT].[LEVEL01].Members *
[ANOTIFCTN].[LEVEL01].Members
}
ON ROWS
FROM [APM_CP002/APM_CP002_QX006]
WHERE { ( [AWOUSTAT].[E0001] ), ( [AWOUSTAT].[E0002] ) }
But I also need [AWOUSTAT] in my table, to see which status value is actually applicable.
However MDX throws an error if I add it: 'You may not use the same dimension on different axes'. I understand this in principle but not in this application ('filter' is not an axis to me...)
How can I resolve this, without having to create two queries?
Thanx!
Since you cannot have the same dimension on ROWS and in the WHERE clause, and you want to see [AWOUSTAT].[E0001] and [AWOUSTAT].[E0002] in your table, I think you could try something like this to start:
SELECT
NON EMPTY
{ [confidentialstring].Members }
ON COLUMNS,
NON EMPTY
{ ( [AWOUSTAT].[E0001] ), ( [AWOUSTAT].[E0002] ) }
ON ROWS
FROM [APM_CP002/APM_CP002_QX006]
A common technique is to replace the WHERE clause with a subselect in the FROM clause.
Sample using adventure works:
SELECT
NON EMPTY
{[Measures].[Order Count]} ON COLUMNS
,NON EMPTY
{[Date].[Day of Month].[Day of Month].ALLMEMBERS}
ON ROWS
FROM
(
SELECT
{[Date].[Fiscal Year].&[2011]} ON COLUMNS
FROM [Adventure Works]
)
In the above example, the subselect is equivalent to having a WHERE clause of Fiscal Year = 2011.
You can see similar output whenever you use the SSMS Cube Browser to build queries.
It may look strange to replace WHERE with FROM, but in most cases with the cube it does provide the required output.
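Applied back to the cube in the question, the pattern might look like this (a sketch; the member unique names are taken from the question and may need adjusting):
SELECT
NON EMPTY
{ [confidentialstring].Members }
ON COLUMNS,
NON EMPTY
{ [APMWORDER].[LEVEL01].Members *
[APMSTDOPR].[LEVEL01].Members *
[AWOUSTAT].[LEVEL01].Members *
[ANOTIFCTN].[LEVEL01].Members
}
ON ROWS
FROM
(
SELECT
{ [AWOUSTAT].[E0001], [AWOUSTAT].[E0002] } ON COLUMNS
FROM [APM_CP002/APM_CP002_QX006]
)
Because the subselect restricts the cube space instead of occupying an axis, [AWOUSTAT] can still be placed on ROWS.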

Strange behaviour when using FILTER to filter a different table with no direct relationship?

I have two fact tables, First and Second, and two dimension tables, dimTime and dimColour.
Fact table First looks like this: [table screenshot omitted]
and fact table Second looks like this: [table screenshot omitted]
Both dim-tables have 1:* relationships to both fact tables and the filtering is one-directional (from dim to fact), like this:
dimColour[Color] 1 -> * First[Colour]
dimColour[Color] 1 -> * Second[Colour]
dimTime[Time] 1 -> * First[Time]
dimTime[Time] 1 -> * Second[Time_]
Adding the following measure, I would expect the FILTER function not to have any effect on the calculation, since Second does not filter First, right?
Test_Alone =
CALCULATE (
    SUM ( First[Amount] );
    First[Alone] = "Y";
    FILTER (
        'Second';
        'Second'[Colour] = "Red"
    )
)
So this should evaluate to 7, since only two rows in First have [Alone] = "Y", with values 1 and 6, and there is no direct relationship between First and Second. However, this evaluates to 6. If I remove the FILTER argument from the CALCULATE, it evaluates to 7.
There are three additional measures in the attached pbix-file which show the same type of behaviour.
How is filtering one fact table which has no direct relationship to a second fact table affecting the calculation done on the second table?
Zipped Power BI-file: PowerBIFileDownload
Evaluating the table reference 'Second' produces a table that includes the columns in both the Second table, as well as those in all the (transitive) parents of the Second table.
In this case, this is a table with all of the columns in dimColour, dimTime, Second.
You can't see this if you just run:
evaluate 'Second'
as when 'evaluate' returns the results to the user, these "Parent Table" (or "Related") columns are not included.
Even so, these columns are certainly present.
When a table is converted to a row context, these related columns become available via RELATED.
See the following queries:
evaluate FILTER('Second', ISBLANK(RELATED(dimColour[Color])))
evaluate 'Second' order by RELATED(dimTime[Hour])
Similarly, when arguments to CALCULATE are used to update the filter context, these hidden "related" columns are not ignored; hence, they can end up filtering First in your example. You can see this by using a function that strips the related columns, such as INTERSECT:
Test_ActuallyAlone =
CALCULATE (
    SUM ( First[Amount] ),
    First[Alone] = "Y",
    //This filter now does nothing, as none of the columns in Second
    //have an impact on 'SUM ( First[Amount] )'; and the related columns
    //are removed by the INTERSECT.
    FILTER (
        INTERSECT ( 'Second', 'Second' ),
        'Second'[Colour] = "Red"
    )
)
(See these resources describing the "expanded table", an alternative but equivalent explanation of this behaviour:
https://www.sqlbi.com/articles/expanded-tables-in-dax/
https://www.sqlbi.com/articles/context-transition-and-expanded-tables/ )
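An alternative sketch (assuming the same model as the question): filtering just the column, rather than the whole table, avoids the expanded-table columns in the first place:
Test_ColumnFilter =
CALCULATE (
    SUM ( First[Amount] ),
    First[Alone] = "Y",
    // A column filter carries only Second[Colour], not the expanded table
    // of Second, so it cannot reach First through dimColour.
    FILTER ( ALL ( 'Second'[Colour] ), 'Second'[Colour] = "Red" )
)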

Linq Query Where Contains

I'm attempting to make a LINQ Where/Contains query quicker.
The data set contains 256,999 clients. Ids is just a simple list of GUIDs and might contain only 3 records.
The query below can take up to a minute to return the 3 records. This is because the logic goes through all 256,999 records to see which of them are within the list of 3.
returnItems = context.ExecuteQuery<DataClass.SelectClientsGridView>(sql).Where(x => ids.Contains(x.ClientId)).ToList();
I would like the query to instead check whether the three records are within the pot of 256,999; that way round it should be much quicker.
I don't want to do a loop, as the 3 records could be far more (thousands). The more loops, the more hits to the DB.
I don't want to grab all the DB records (256,999) and then do the query, as it would take nearly the same amount of time.
If I grab just the Ids for all the 256,999 from the DB, it takes a second. This is where the Ids come from (a filtered, small and simple list).
Any Ideas?
Thanks
You've said "I don't want to grab all the db records (256,999) and then do the query as it would take nearly the same amount of time," but also "If I grab just the Ids for all the 256,999 from the DB it would take a second." So does this really take "just as long"?
returnItems = context.ExecuteQuery<DataClass.SelectClientsGridView>(sql).Select(x => x.ClientId).ToList().Where(x => ids.Contains(x)).ToList();
Unfortunately, even if this is fast, it's not an answer, as you'll still need effectively the original query to actually extract the full records for the Ids matched :-(
So, adding an index is likely your best option.
The reason the Id query is quicker is that only one field is returned and it's only a single-table query.
The main query contains subqueries (below). So I get the Ids from a quick and easy query, then use the Ids to get the more detailed information.
SELECT Clients.Id as ClientId, Clients.ClientRef as ClientRef, Clients.Title + ' ' + Clients.Forename + ' ' + Clients.Surname as FullName,
[Address1] ,[Address2],[Address3],[Town],[County],[Postcode],
Clients.Consent AS Consent,
CONVERT(nvarchar(10), Clients.Dob, 103) as FormatedDOB,
CASE WHEN Clients.IsMale = 1 THEN 'Male' WHEN Clients.IsMale = 0 THEN 'Female' END As Gender,
Convert(nvarchar(10), Max(Assessments.TestDate),103) as LastVisit,
CASE WHEN Max(Convert(integer,Assessments.Submitted)) = 1 Then 'true' ELSE 'false' END AS Submitted,
CASE WHEN Max(Convert(integer,Assessments.GPSubmit)) = 1 Then 'true' ELSE 'false' END AS GPSubmit,
CASE WHEN Max(Convert(integer,Assessments.QualForPay)) = 1 Then 'true' ELSE 'false' END AS QualForPay,
Clients.UserIds AS LinkedUsers
FROM Clients
Left JOIN Assessments ON Clients.Id = Assessments.ClientId
Left JOIN Layouts ON Layouts.Id = Assessments.LayoutId
GROUP BY Clients.Id, Clients.ClientRef, Clients.Title, Clients.Forename, Clients.Surname, [Address1] ,[Address2],[Address3],[Town],[County],[Postcode],Clients.Consent, Clients.Dob, Clients.IsMale,Clients.UserIds
ORDER BY ClientRef
I was hoping there was an easier way to do the Contains element, as the pool of Ids is smaller than the main pool.
A way I've sped it up for now: I've done a String.Join over the list of Ids and added them in a WHERE within the main SQL. This has reduced the time down to a second or so.
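A sketch of that workaround (names are from the question; treat the splicing as illustrative). Note the IN list has to land before the GROUP BY rather than at the end of the query, and inlining values is only reasonable here because the Ids are GUIDs from our own filtered list; anything user-supplied should be parameterized:
// Build "'guid1','guid2',..." from the small, pre-filtered ID list.
string inList = string.Join(",", ids.Select(id => "'" + id + "'"));
// Splice the WHERE in ahead of the GROUP BY of the main query.
string filteredSql = sql.Replace("GROUP BY", "WHERE Clients.Id IN (" + inList + ") GROUP BY");
returnItems = context.ExecuteQuery<DataClass.SelectClientsGridView>(filteredSql).ToList();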

Oracle 9i Sub query

Hi, can anyone help me out with this query-forming logic?
SELECT C.CPPID, c.CPP_AMT_MANUAL
FROM CPP_PRCNT CC,CPP_VIEW c
WHERE
CC.CPPYR IN (
SELECT C.YEAR FROM CPP_VIEW_VIEW C WHERE UPPER(C.CPPNO) = UPPER('123')
AND C.CPP_CODE ='CPP000000000053'
and TO_CHAR(c.CPP_DATE,'YYYY/Mon')='2012/Nov'
)
AND UPPER(C.CPPNO) = UPPER('123')
AND C.CPP_CODE ='CPP000000000053'
and TO_CHAR(c.CPP_DATE,'YYYY/Mon') = '2012/Nov';
Please correct me if I have formed the wrong query structure, in terms of query performance and standards. Thanks in advance.
If you have indexes or partitioned tables, I would not use functions on columns but on variables, to be able to use the indexes / select the partitions.
Also, I use ANSI-92 SQL syntax. You don't specify (or not directly) a join condition between CPP_PRCNT and CPP_VIEW, so it is actually a Cartesian product (cross join):
SELECT C.CPPID, c.CPP_AMT_MANUAL
FROM CPP_PRCNT CC
CROSS JOIN CPP_VIEW c
WHERE
CC.CPPYR IN (
SELECT C.YEAR
FROM CPP_VIEW_VIEW C
WHERE C.CPPNO = '123'
AND C.CPP_CODE ='CPP000000000053'
AND trunc(c.CPP_DATE,'MM')=to_date('2012/Nov','YYYY/Mon')
)
AND C.CPPNO = '123'
AND C.CPP_CODE ='CPP000000000053'
AND trunc(c.CPP_DATE,'MM')=to_date('2012/Nov','YYYY/Mon')
If you show us the definition of CPP_VIEW_VIEW (it seems to be a view over CPP_VIEW), the definition (if simple) of CPP_VIEW, and what you're trying to achieve, I bet there are more things to be improved/fixed.
There are a couple of things you could improve (a sketch follows this list):
if possible, get rid of the UPPER() in the comparison - this will render any indices useless. If that's not possible, consider a function-based index on UPPER(CPPNO)
do not convert your DATE column to a string to compare it with a string - do it the other way round (i.e. convert your string to a date => only one conversion needed instead of one per table row, use of indices possible)
play around with EXISTS instead of IN, as suggested by Dileep - might be faster
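A sketch of the first two suggestions (table and column names are taken from the question; CPP_VIEW looks like a view, in which case the function-based index belongs on its base table):
-- Function-based index so UPPER(CPPNO) comparisons can still use an index:
CREATE INDEX cpp_view_cppno_fbi ON CPP_VIEW (UPPER(CPPNO));
-- Compare the DATE column against a half-open date range instead of
-- converting every row's date to a string:
AND c.CPP_DATE >= TO_DATE('2012-11-01', 'YYYY-MM-DD')
AND c.CPP_DATE <  ADD_MONTHS(TO_DATE('2012-11-01', 'YYYY-MM-DD'), 1)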
