I wrote a LINQ query that performs an orderby on an Entity Framework Core (.NET Core 2.0.7) database context using the Sum extension method. It works fine on a small sample database, but when run against a larger database (~100,000 entries) it becomes significantly slower and uses more CPU. I have pasted the relevant code below. Is there a way to perform the Sum faster? (It's essentially a weighted average over an arbitrary number of tuples.)
var iqClientIds = (from stat in context.Set<EFClientStatistics>()
                   join client in context.Clients
                       on stat.ClientId equals client.ClientId
                   group stat by stat.ClientId into s
                   orderby s.Sum(cs => cs.Performance * cs.TimePlayed) / s.Sum(cs => cs.TimePlayed) descending
                   select new
                   {
                       s.First().ClientId
                   })
                  .Skip(start)
                  .Take(count);
Thanks!
EF Core 2 handles GroupJoin by translating it to SQL, and your query can be converted to use it:
var iqClientIds = (from client in context.Clients
                   join stat in context.Set<EFClientStatistics>()
                       on client.ClientId equals stat.ClientId into sj
                   orderby sj.Sum(s => s.Performance * s.TimePlayed) / sj.Sum(s => s.TimePlayed) descending
                   select sj.First().ClientId)
                  .Skip(start)
                  .Take(count);
NOTE: I simplified the select so it does not create an anonymous object for a single value.
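The weighted-average ordering itself can be sanity-checked outside the database with LINQ to Objects. A minimal sketch with made-up numbers (this is an illustration of the grouping logic, not the EF-translated query):

```csharp
using System;
using System.Linq;

// Hypothetical in-memory rows standing in for EFClientStatistics;
// the numbers are made up for illustration.
var stats = new[]
{
    new { ClientId = 1, Performance = 100.0, TimePlayed = 10.0 },
    new { ClientId = 1, Performance = 200.0, TimePlayed = 30.0 },
    new { ClientId = 2, Performance = 150.0, TimePlayed = 20.0 },
};

// Weighted average per client: sum(performance * time) / sum(time).
var ranked = stats
    .GroupBy(s => s.ClientId)
    .OrderByDescending(g => g.Sum(s => s.Performance * s.TimePlayed)
                          / g.Sum(s => s.TimePlayed))
    .Select(g => g.Key)
    .ToList();

// Client 1: (100*10 + 200*30) / 40 = 175; client 2: 150.
Console.WriteLine(string.Join(",", ranked)); // prints "1,2"
```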
I am seeing very long latencies in Apache Spark when running some SQL queries. To simplify the work, I run my calculations sequentially: the output of each query is stored as a temporary table (.registerTempTable('TEMP')) so that it can be used by the following SQL query, and so on. But the queries take far too long, while equivalent 'pure Python' code takes just a few minutes.
sqlContext.sql("""
SELECT PFMT.* ,
DICO_SITES.CodeAPI
FROM PFMT
INNER JOIN DICO_SITES
ON PFMT.assembly_department = DICO_SITES.CodeProg """).registerTempTable("PFMT_API_CODE")
sqlContext.sql("""
SELECT GAMMA.*,
(GAMMA.VOLUME*GAMMA.PRORATA)/100 AS VOLUME_PER_SUPPLIER
FROM
(SELECT PFMT_API_CODE.* ,
SUPPLIERS_PROP.CODE_SITE_FOURNISSEUR,
SUPPLIERS_PROP.PRORATA
FROM PFMT_API_CODE
INNER JOIN SUPPLIERS_PROP ON PFMT_API_CODE.reference = SUPPLIERS_PROP.PIE_NUMERO
AND PFMT_API_CODE.project_code = SUPPLIERS_PROP.FAM_CODE
AND PFMT_API_CODE.CodeAPI = SUPPLIERS_PROP.SITE_UTILISATION_FINAL) GAMMA """).registerTempTable("TEMP_ONE")
sqlContext.sql("""
SELECT TEMP_ONE.* ,
ADCP_DATA.* ,
CASE
WHEN ADCP_DATA.WEEK <= weekofyear(from_unixtime(unix_timestamp())) + 24 THEN ADCP_DATA.CAPACITY_ST + ADCP_DATA.ADD_CAPACITY_ST
WHEN ADCP_DATA.WEEK > weekofyear(from_unixtime(unix_timestamp())) + 24 THEN ADCP_DATA.CAPACITY_LT + ADCP_DATA.ADD_CAPACITY_LT
END AS CAPACITY_REF
FROM TEMP_ONE
INNER JOIN ADCP_DATA
ON TEMP_ONE.reference = ADCP_DATA.PART_NUMBER
AND TEMP_ONE.CodeAPI = ADCP_DATA.API_CODE
AND TEMP_ONE.project_code = ADCP_DATA.PROJECT_CODE
AND TEMP_ONE.CODE_SITE_FOURNISSEUR = ADCP_DATA.SUPPLIER_SITE_CODE
AND TEMP_ONE.WEEK_NUM = ADCP_DATA.WEEK_NUM
""").registerTempTable("TEMP_BIS")
sqlContext.sql("""
SELECT TEMP_BIS.CSF_ID,
TEMP_BIS.CF_ID ,
TEMP_BIS.CAPACITY_REF,
TEMP_BIS.VOLUME_PER_SUPPLIER,
CASE
WHEN TEMP_BIS.CAPACITY_REF >= VOLUME_PER_SUPPLIER THEN 'CAPACITY_OK'
WHEN TEMP_BIS.CAPACITY_REF < VOLUME_PER_SUPPLIER THEN 'CAPACITY_NOK'
END AS CAPACITY_CHECK
FROM TEMP_BIS
""").take(100)
Could anyone point out the best practices (if there are any) for writing PySpark SQL queries on Spark?
Does it make sense that locally on my computer the script is much faster than on the Hadoop cluster?
Thanks in advance
You should cache your intermediate results. Also, what is the data source? Can you retrieve only the relevant data from it, or only the relevant columns? There are many options, but you should provide more information about your data.
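For example, each intermediate result can be cached before it is registered, so later queries reuse it instead of recomputing the whole chain. This is a fragment sketched against the first query from the question; it assumes the existing sqlContext and source tables:

```python
# Sketch: cache the intermediate DataFrame so subsequent queries reuse
# it in memory instead of recomputing the join from scratch.
pfmt_api_code = sqlContext.sql("""
    SELECT PFMT.*, DICO_SITES.CodeAPI
    FROM PFMT
    INNER JOIN DICO_SITES
      ON PFMT.assembly_department = DICO_SITES.CodeProg
""")
pfmt_api_code.cache()  # or .persist() to choose a storage level
pfmt_api_code.registerTempTable("PFMT_API_CODE")

# Equivalently, an already-registered table can be cached by name:
# sqlContext.cacheTable("PFMT_API_CODE")
```

Note that caching is lazy: the data is materialized the first time an action runs against it.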
I am trying to compare two tables (i.e. values, count, etc.) in LINQ to SQL, but I cannot find a way to achieve it. I tried the following:
Table1.Any(i => i.itemNo == Table2.itemNo)
It gives an error. Could you please help me?
Thanks in Advance.
How about:
var isDifferent =
    Table1.Zip(Table2, (j, k) => j.itemNo == k.itemNo).Any(m => !m);
EDIT
If LINQ to SQL does not support Zip:
var one = Table1.ToList();
var two = Table2.ToList();
var isDifferent =
    one.Zip(two, (j, k) => j.itemNo == k.itemNo).Any(m => !m);
If the tables are very large, this could cause performance problems. In that case you will need a much more sophisticated solution; if so, please ask.
EDIT2
If the tables are very large you don't want to pull all the data from the server and hold it in memory. Additionally, LINQ and SQL Server do not guarantee the order of the rows unless you specify an order in the query. This becomes especially relevant for large result sets returned by a multi-processor server, where the effects of parallelism are likely to come into play.
I suggest that LINQ to SQL doesn't really cater for your scenario, so you will have to help it out using ExecuteQuery, something like this:
string zipQuery =
@"SELECT TOP 1
1
FROM
[Table1] [one]
WHERE
NOT EXISTS (
SELECT * FROM [Table2] [two] WHERE [two].[itemNo] = [one].[itemNo]
)
UNION ALL
SELECT
1
FROM
[Table2] [two]
WHERE
NOT EXISTS (
SELECT * FROM [Table1] [one] WHERE [one].[itemNo] = [two].[itemNo]
)
UNION ALL
SELECT 0";
var isDifferent = context.ExecuteQuery<int>(zipQuery).Max() == 1;
This will do the comparison on the server without returning lots of data to the client, but, I think you will agree, it is much more complicated.
EDIT3
Okay, the Zip approach should be fine for 1,000 rows. I've read your comment and suggest changing the code accordingly.
var one = Table1.ToList();
var two = Table2.ToList();
var isDifferent =
    one.Count != two.Count ||
    one.Zip(two, (o, t) => o.itemNo == t.itemNo).Any(m => !m);
You should probably also put an ORDER BY on the list retrievers, like this:
var one = Table1.OrderBy(o => o.itemNo).ToList();
Strictly, the results of a LINQ to SQL query come back in any order unless an order is specified.
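The ordering caveat can also be sidestepped for an in-memory comparison by sorting both sides just before the Zip. A minimal runnable sketch, with plain integer arrays standing in for the two tables' itemNo columns:

```csharp
using System;
using System.Linq;

// Plain integer arrays standing in for the two tables' itemNo columns.
var one = new[] { 3, 1, 2 };
var two = new[] { 2, 3, 1 };

// Sort both sides so Zip compares matching positions, and also compare
// the counts (Zip alone silently ignores trailing extra elements).
var isDifferent =
    one.Length != two.Length ||
    one.OrderBy(x => x)
       .Zip(two.OrderBy(x => x), (o, t) => o == t)
       .Any(m => !m);

Console.WriteLine(isDifferent); // prints "False": same items, different order
```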
Does anyone have any tips for calculating percentages in Linq to Entities?
I'm guessing that there must be a more efficient way than returning 2 results and calculating in memory. Perhaps an inventive use of let or into?
EDIT
Thanks Mark for your comment, here is a code snippet, but I think this will result in 2 database hits:
int passed = (from lpt in this.PushedLearnings.Select(pl => pl.LearningPlanTask)
where lpt.OnlineCourseScores.Any(score => score.ActualScore >= (lpt.LearningResource.PassMarkPercentage ?? 80))
select lpt).Count();
int total = (from lpt in this.PushedLearnings.Select(pl => pl.LearningPlanTask)
select lpt).Count();
double percentage = passed * 100.0 / total;
If you use LINQ to Entities and write something along the lines of select x * 100.0 / y in your query, the expression will be converted to SQL and run in the database. It will be efficient.
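As a sketch of the single-expression shape (LINQ to Objects here purely for illustration; the scores and the 80 pass mark are made up to mirror the question):

```csharp
using System;
using System.Linq;

// Made-up scores standing in for the course results; 80 is the
// illustrative pass mark from the question.
var scores = new[] { 95, 60, 85, 70 };
const int passMark = 80;

// One expression: the pass count and the total come from the same
// sequence, and 100.0 forces floating-point division (no truncation).
var percentage = scores.Count(s => s >= passMark) * 100.0 / scores.Length;

Console.WriteLine(percentage); // 2 of 4 passed -> prints "50"
```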
How do I set SORT_AREA_SIZE in Oracle 10g, and what size should it be, given that I have more than 2.2 million rows in a single table? Please also tell me the suggested size for SORT_AREA_RETAINED_SIZE.
My queries are far too slow; most take more than an hour to complete.
Please suggest ways I can optimize my queries and tune the Oracle 10g database.
Thanks.
UPDATE: the query is
SELECT A.TITLE,C.TOWN_VILL U_R,F.CODE TOWN_CODE,F.CITY_TOWN_MAKE,A.FRM,A.PRD_CODE,A.BR_CODE,A.SIZE_CODE ,B.PRICES,
A.PROJECT_YY,A.PROJECT_MM,d.province ,D.BR_CODE BRANCH_CODE,D.STRATUM,L.LSM_GRP LSM,
SUM(GET_FRAC_FACTOR_ALL_PR_NEW(A.FRM,A.PRD_CODE,A.BR_CODE,A.SIZE_CODE,A.PROJECT_YY,A.PROJECT_MM,A.FRAC_CODE ,B.PRICES,A.QTY_USED,A.VERIF_CODE, A.PACKING_CODE, J.TYPE ,'R') )
* MAX(D.UNIVERSE) / MAX(E.SAMPLE) /1000000 MARKET , D.UNIVERSE ,E.SAMPLE
FROM A2_FOR_CPMARKETS A,
BRAND J,
PRICES B,CP_SAMPLE_ALL_MONTHS C ,
CP_LSM L,
HOUSEHOLD_GL D,
SAMPLE_CP_ALL_MONTHS E ,
City_Town_ALL F
WHERE A.PRD_CODE = B.PRD_CODE
AND A.BR_CODE = B.BR_CODE
AND DECODE(A.SIZE_CODE,NULL,'L',A.SIZE_CODE) = B.SIZE_CODE -- for unbranded loose
AND DECODE(B.VAR_CODE,'X','X',A.VAR_CODE) = B.VAR_CODE
AND DECODE(B.COL_CODE,'X','X',A.COL_CODE) = B.COL_CODE
AND DECODE(B.PACK_CODE,'X','X',A.PACKING_CODE) = B.PACK_CODE
AND A.project_yy||A.project_MM BETWEEN B.START_DATE AND B.END_DATE
AND A.PRD_CODE=J.PRD_CODE
AND A.BR_CODE=J.BR_CODE
AND A.FRM = C.FRM
AND A.PROJECT_YY=L.YEAR
AND A.frm=L.FORM_NO
AND C.TOWN_VILL= D.U_R
AND C.CLASS = D.CLASS
AND D.TOWN=F.GRP
AND D.TOWN = E.TOWN_CODE
AND A.PROJECT_YY = E.PROJECT_YY
AND A.PROJECT_MM = E.PROJECT_MM
AND A.PROJECT_YY = C.PROJECT_YY
AND A.PROJECT_MM = C.PROJECT_MM
-- FOR HOUSEJOLD_GL
AND A.PROJECT_YY = D.YEAR
AND A.PROJECT_MM = D.MONTH
-- END HOUSEHOLD_GL
AND C.TOWN_VILL = E.TOWN_VILL
AND C.CLASS = E.CLASS
AND C.TOWN_VILL = F.TOWN_VILL
AND C.TOWN_CODE=F.CODE
AND (DECODE(e.PROJECT_YY,'1997','1','1998','1','1999','1','2000','1','2001','1','2002','1','2') = F.TYP )
GROUP BY A.TITLE,C.TOWN_VILL,F.CODE ,F.CITY_TOWN_MAKE,A.FRM,A.PRD_CODE,A.BR_CODE,A.SIZE_CODE ,B.PRICES,
A.PROJECT_YY,A.PROJECT_MM,d.province,D.BR_CODE ,D.STRATUM,L.LSM_GRP ,
UNIVERSE ,E.SAMPLE
(The poster attached an explain plan as an image, explain plan.jpg, linked from a local file path and therefore not accessible.)
See the Oracle documentation for SORT_AREA_SIZE. You can use ALTER SESSION SET SORT_AREA_SIZE = 10000 to modify it for the session, and ALTER SYSTEM to modify it system-wide. The same applies to SORT_AREA_RETAINED_SIZE.
Is your entire table (with 2.2 million rows) fetched in the result set? Is there some sort operation in it?
There could be some other reasons for the query to perform badly. Can you share the query and explain plan?
When you generate an execution plan for the query using the DBMS_XPLAN.DISPLAY method, Oracle will estimate (usually pretty reasonably) what size of temporary tablespace storage you would need to execute it.
2.2 million rows may be irrelevant to the sort size, by the way. The memory required for aggregate operations such as MAX and SUM is related more to the size of the result set than to the size of the source data.
Providing a link to a jpg file stored on your PC does not count as having provided an execution plan, by the way.
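Generating and displaying such a plan is a two-statement sketch like the following (substitute the problem query for the placeholder):

```sql
-- Sketch: produce and display the execution plan for a statement.
EXPLAIN PLAN FOR
SELECT 1 FROM DUAL;  -- placeholder: put the slow query here

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```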
A.project_yy||A.project_MM BETWEEN B.START_DATE AND B.END_DATE
You know we have DATE datatypes in databases, right? Using the wrong datatype makes it harder for Oracle to determine data distributions, predicate selectivity, and appropriate query plans.
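For illustration, if the price-validity columns were stored as real dates, the predicate could be written as follows (a sketch; it assumes B.START_DATE and B.END_DATE have been converted to DATE columns and that PROJECT_YY / PROJECT_MM are 'YYYY' and 'MM' strings):

```sql
-- Sketch: compare dates as dates, not as concatenated strings.
AND TO_DATE(A.PROJECT_YY || A.PROJECT_MM, 'YYYYMM')
        BETWEEN B.START_DATE AND B.END_DATE
```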