How to benchmark single TypoScript object generation? - performance

I would like to benchmark the generation of a single TypoScript object to keep an eye on performance. Is this possible, perhaps with some stdWrap methods?
Examples of the TS objects I would like to benchmark:
Test 1
page.10 = RECORDS
page.10 {
  tables = pages
  source = 1
  dontCheckPid = 1
  conf.pages = TEXT
  conf.pages.field = title
}
Test 2
page.20 = CONTENT
page.20 {
  table = tt_content
  select {
    pidInList = 0
    recursive = 99
    where = uid = 1
  }
}
For each object I need the generation time and the number of queries fired.

I guess it could be done via an extension. There should be a way to hook into (or XCLASS) the database layer (like DBAL does). In your extension you could then test the different TypoScript setups via $this->cObj->cObjGetSingle($this->conf['test1'], $this->conf['test1.'], 'test1');
Perhaps also have a look at t3lib_timeTrack; maybe what is tracked there is already enough. But AFAIK everything that is tracked there is available via the Admin Panel (check all checkboxes).

Related

Getting max value on server (Entity Framework)

I'm using EF Core but I'm not really an expert with it, especially when it comes to details like querying tables in a performant manner...
So what I'm trying to do is simply get the max value of one column from a table with filtered data.
What I have so far is this:
protected override void ReadExistingDBEntry()
{
using Model.ResultContext db = new();
// Filter the table data to the rows relevant to us. The whole table may contain 0 rows or millions of them
IQueryable<Measurement> dbMeasuringsExisting = db.Measurements
.Where(meas => meas.MeasuringInstanceGuid == Globals.MeasProgInstance.Guid
&& meas.MachineId == DBMatchingItem.Id);
if (dbMeasuringsExisting.Any())
{
// the max value we're interested in. Still dbMeasuringsExisting could contain millions of rows
iMaxMessID = dbMeasuringsExisting.Max(meas => meas.MessID);
}
}
The equivalent SQL to what I want would be something like this.
select max(MessID)
from Measurement
where MeasuringInstanceGuid = Globals.MeasProgInstance.Guid
and MachineId = DBMatchingItem.Id;
While the above code works (it returns the correct value), I think it has a performance issue as the table grows larger, because the Max filtering is done client-side after all rows have been transferred. Or am I wrong here?
How to do it better? I want the database server to filter my data. Of course I don't want any SQL script ;-)
This can be addressed by typing the return as nullable, so that an empty sequence does not throw an error, and then applying a default value for the int. Alternatively, you can just assign it to a nullable int. Note the assumption here of an integer type for the ID; the same principle would apply to a Guid as well.
int MaxMessID = dbMeasuringsExisting.Max(p => (int?)p.MessID) ?? 0;
There is no need for the Any() call, as that causes an additional round trip to the database, which is not desirable in this case.
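Putting both points together, a minimal sketch of the rewritten method (assuming the same Measurement entity and the Globals/DBMatchingItem references from the question):
protected override void ReadExistingDBEntry()
{
    using Model.ResultContext db = new();
    // One round trip: the Where and the Max are both translated to SQL, and the
    // nullable cast makes an empty result yield null instead of throwing.
    iMaxMessID = db.Measurements
        .Where(meas => meas.MeasuringInstanceGuid == Globals.MeasProgInstance.Guid
                    && meas.MachineId == DBMatchingItem.Id)
        .Max(meas => (int?)meas.MessID) ?? 0;
}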

Select Count very slow using EF with Oracle

I'm using EF 5 with an Oracle database.
I'm doing a select count on a table with a specific parameter. When I use EF, the query returns the expected value (31), but the result takes about 10 seconds to come back.
using (var serv = new Aperam.SIP.PXP.Negocio.Modelos.SIP_PA())
{
var teste = (from ens in serv.PA_ENSAIOS_UM
where ens.COD_IDENT_UNMET == "FBLDY3840"
select ens).Count();
}
If I execute the simple query below, the result is the same (31), but it comes back in 500 milliseconds.
SELECT
count(*)
FROM
PA_ENSAIOS_UM
WHERE
COD_IDENT_UNMET = 'FBLDY3840'
Is there a way to improve the performance when I'm using EF?
Note: there are 13,000,000 rows in this table.
Here are some things you can try:
Capture the query that is being generated and see if it is the same as the one you are using. Details can be found here, but essentially, you instantiate your DbContext (let's call it "_context") and then set the Database.Log property to your logging method. It's fine if this method doesn't actually do anything; you can just set a breakpoint in there and see what's going on.
So, as an example, define a logging function (I have a static class called "Logging" which uses NLog to write to files):
public static void LogQuery(string queryData)
{
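// _sqlLogger and _genLogger are assumed to be NLog Logger instances defined elsewhere on the Logging class.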
if (string.IsNullOrWhiteSpace(queryData))
return;
var message = string.Format("{0}{1}",
queryData.Trim().Contains(Environment.NewLine) ?
Environment.NewLine : "", queryData);
_sqlLogger.Info(message);
_genLogger.Trace($"EntityFW query (len {message.Length} chars)");
}
Then when you create your context point to LogQuery:
_context.Database.Log = Logging.LogQuery;
When you do your tests, remember that the first run is often the slowest because the server has to actually do the work, while subsequent runs can use cached data. Try running your tests 2-3 times back to back and see if they don't start to run in the same time.
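For example, a rough way to compare cold and warm timings (just a sketch using Stopwatch, not a rigorous benchmark):
using (var serv = new Aperam.SIP.PXP.Negocio.Modelos.SIP_PA())
{
    var sw = new System.Diagnostics.Stopwatch();
    for (int i = 1; i <= 3; i++)
    {
        sw.Restart();
        var count = serv.PA_ENSAIOS_UM.Count(ens => ens.COD_IDENT_UNMET == "FBLDY3840");
        sw.Stop();
        // The first run typically pays for query compilation and cold caches.
        Console.WriteLine(string.Format("Run {0}: count={1} in {2} ms", i, count, sw.ElapsedMilliseconds));
    }
}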
I don't know if it generates the same query or not, but try this other form, which should be functionally equivalent but may perform better:
var teste = serv.PA_ENSAIOS_UM.Count(ens=>ens.COD_IDENT_UNMET == "FBLDY3840");
I'm wondering if the version you have pulls the data from the DB and THEN counts it. If so, this other syntax may leave all the work to be done at the server, where it belongs. Not sure, though, especially since I have never used EF with Oracle and I don't know if it behaves the same as SQL Server or not.

Plugin performance in Microsoft Dynamics CRM 2013/2015

Time to leave shy mode behind and make my first post on Stack Overflow.
After doing loads of research (plugins, performance, indexes, types of update, friends) and trying several approaches, I was unable to find a proper answer/solution.
So, if possible, I would like your feedback/help on a Microsoft Dynamics CRM 2013/2015 plugin performance issue (or coding technique).
Scenario:
Microsoft Dynamics CRM 2013/2015
2 Entities with Relationship 1:N
EntityA
EntityB
EntityB has the following columns:
Id | EntityAId | ColumnDemoX (decimal) | ColumnDemoY (currency)
Entity A has 500 records.
Entity B has 150 records per Entity A record, so 500 * 150 = 75,000 records.
Objective:
Create a post-update plugin on Entity A that "mimics" the following SQL command:
Update EntityB
Set ColumnDemoX = (some quantity), ColumnDemoY = (some quantity) * (some value)
Where EntityAId = (some id)
One approach could be:
using (var serviceContext = new XrmServiceContext(service))
{
    var query = from b in serviceContext.EntityBSet
                where b.EntityAId.Equals(someId)
                select b;
    foreach (EntityB entB in query)
    {
        entB.ColumnDemoX = (some quantity);
        serviceContext.UpdateObject(entB);
    }
    serviceContext.SaveChanges();
}
Problem:
The foreach over 150 records in the post-update plugin takes 20 secs or more, while the
Update EntityB Set ColumnDemoX = (some quantity), ColumnDemoY = (some quantity) * (some value) Where EntityAId = (some id)
takes 0.00001 secs.
Any suggestion/solution?
Thank you all for reading.
H
You can use ExecuteMultipleRequest: while you iterate over the 150 entities, collect the ones you need to update, and afterwards execute a single request. If you do this you only call the service once, which is very good for performance.
If your process could keep growing, you should think about making it asynchronous, as a plug-in or a custom workflow activity.
This is an example:
// Create an ExecuteMultipleRequest object.
var requestWithResults = new ExecuteMultipleRequest()
{
    // Assign settings that define execution behavior: continue on error, return responses.
    Settings = new ExecuteMultipleSettings()
    {
        ContinueOnError = false,
        ReturnResponses = true
    },
    // Create an empty organization request collection.
    Requests = new OrganizationRequestCollection()
};
// Add an UpdateRequest for each entity to the request collection.
foreach (var entity in input.Entities)
{
    UpdateRequest updateRequest = new UpdateRequest { Target = entity };
    requestWithResults.Requests.Add(updateRequest);
}
// Execute all the requests in the request collection using a single web method call.
ExecuteMultipleResponse responseWithResults =
    (ExecuteMultipleResponse)_serviceProxy.Execute(requestWithResults);
A few solutions come to mind, but I don't think they will please you...
Is this really a problem? Yes, it's slow, and a database update can be so much faster. However, if you can run it as a background (asynchronous) process, you'll get your numbers anyway. Is it really an "I need these numbers the second I click or business will go down" situation?
It can be a reason to ditch 2013: in CRM 2015 you can use a calculated field. If you only need these numbers to show up on forms (e.g. you don't use them in reporting), you could also do it in JavaScript.
Warning: this one is for the desperate. If you really need your update to be synchronous and immediate, you can't use calculated fields, you really know what you're doing, etc., why not do it directly in the database? I know this is very bad advice; there are a lot of reasons not to do it this way (you can read a few here). It's unsupported, and if you do something wrong it could go really bad. But if your real situation is as simple as your example (just a calculated field, no entity creation, no relationship modification), you could do it this way. You'll have to consider many things: you won't have any auditing on the fields, no security, caching issues, no "modified by", etc. Actually, I pretty much advise against this solution.
1 - Move this logic to an async workflow.
OR
2 - Don't use
serviceContext.UpdateObject(entB);
serviceContext.SaveChanges();
Get all the records (150) in the post stage, update the fields, and use ExecuteMultipleRequest to update the CRM records in one go. Don't send an update request for each and every record.
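A minimal sketch of that approach (the entity and attribute names here are hypothetical, and service is the IOrganizationService available to the plugin):
// Build one bulk request instead of 150 individual service calls.
var bulk = new ExecuteMultipleRequest
{
    Settings = new ExecuteMultipleSettings { ContinueOnError = false, ReturnResponses = false },
    Requests = new OrganizationRequestCollection()
};
foreach (var entB in entityBRecords) // the 150 EntityB rows retrieved in the post stage
{
    // Send only the attributes that change, not the whole record.
    var target = new Entity("new_entityb") { Id = entB.Id };
    target["new_columndemox"] = someQuantity;
    target["new_columndemoy"] = new Money(someQuantity * someValue);
    bulk.Requests.Add(new UpdateRequest { Target = target });
}
service.Execute(bulk);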

Sitecore7 LinqHelper.CreateQuery Buggy?

This is more of a clarification-type question than an actual problem with the LinqHelper.CreateQuery method.
So,
This method has 3 overloads. The 2 in question here are:
1. LinqHelper.CreateQuery<SearchResultItem>(searchContext, searchStringModel)
2. LinqHelper.CreateQuery<SearchResultItem>(searchContext, searchStringModel, startLocationItem) [I haven't used any additional context here, so it defaults to null]
Now,
In order to search for items within a specific location of the content tree (for example, under a particular folder that holds 1000 items), I can use method 1 with the query:
query = "location:{FOLDER_GUID};+custom:my_field_name|bla_bla"
Which works perfectly.
But (from what I understood from the method signature) I should also be able to use method 2, like the following:
SitecoreIndexableItem folderID = (SitecoreIndexableItem)contextDatabase.GetItem("{FOLDER_GUID}");
var index = ContentSearchManager.GetIndex(new SitecoreIndexableItem(Sitecore.Context.Item));
using (var context = index.CreateSearchContext())
{
List<SearchStringModel> searchStringModel = new List<SearchStringModel>();
searchStringModel.Add(new SearchStringModel("my_field_name", "bla_bla"));
List<Sitecore.Data.Items.Item> resultItems = LinqHelper.CreateQuery(context, searchStringModel, folderID).Select(toItem => toItem.GetItem()).ToList();
}
The problem is that, for the above method (method 2), the searching works fine; what doesn't work is the startLocationItem (folderID in this case).
FOR EXAMPLE,
IF my entire Sitecore tree has a total of 3 items containing "my_field_name=bla_bla"
BUT only 1 item contains "my_field_name=bla_bla" in the folder ({FOLDER_GUID}, "the particular folder" in this case)
THEN,
Method 1 returns 1 item (WHICH IS CORRECT)
BUT, Method 2 returns 3 items, despite startLocationItem = {FOLDER_GUID} ... (WHICH I DON'T THINK IS CORRECT)
My questions are:
1. What is the exact purpose of startLocationItem in method 2?
2. And what's the benefit of using the "location" filter (method 1) versus startLocationItem (method 2)?
LinqHelper is an internal helper class and should not be used in normal operation; it exists to help the Sitecore UI talk to the search back-end. Its syntax could change at any time, which could break anything built on it, and it is also not documented.
You would be better off converting your query into a normal LINQ query, i.e.
using (var context = index.CreateSearchContext())
{
    var results = context.GetQueryable<SearchResultItem>()
        .Where(x => x.Paths.Contains(ID.Parse("your GUID here")));
}
The 'location' in the LinqHelper string is equivalent to the Paths (or _path) field stored in the index.
This field contains a list of all the parent items of an item, held as a list of GUIDs.
By filtering on _path you restrict the query to a certain node of the tree without affecting the score. For example:
/home (id: 1234)
/animals (id: 5678) = path: 1234 / 5678
/cats (id: 1111) = path: 1234 / 5678 / 1111
/dogs (id: 2222) = path: 1234 / 5678 / 2222
/cars (id: 4567) = path: 1234 / 4567
/sports (id: 3333) = path: 1234 / 4567 / 3333
If you filter on animals (i.e. 5678) you restrict the search to only that item and its children.
Using a filter means you can restrict the context of a search without that part affecting the scoring of the main query, so you would end up with:
using (var context = index.CreateSearchContext())
{
    var results = context.GetQueryable<SearchResultItem>()
        .Where(x => x.Name.Contains("Exciting"))
        .Filter(y => y.Paths.Contains(ID.Parse("your GUID here")));
}
This would search for items whose name contains 'Exciting', but only inside the part of the tree you have filtered on.
Hope that helps :)

Entity Framework, Table Per Type and Linq - Getting the "Type"

I have an abstract type called Product, and five types that inherit from Product in a table-per-type hierarchy.
I want to get all of the information for all of the Products, including a smattering of properties from the different objects that inherit from Product, and project them into a new class for use in an MVC web page. My LINQ query is below:
//Return the required products
var model = from p in Product.Products
where p.archive == false && ((Prod_ID == 0) || (p.ID == Prod_ID))
select new SearchViewModel
{
ID = p.ID,
lend_name = p.Lender.lend_name,
pDes_rate = p.pDes_rate,
pDes_details = p.pDes_details,
pDes_totTerm = p.pDes_totTerm,
pDes_APR = p.pDes_APR,
pDes_revDesc = p.pDes_revDesc,
pMax_desc = p.pMax_desc,
dDipNeeded = p.dDipNeeded,
dAppNeeded = p.dAppNeeded,
CalcFields = new DAL.SearchCalcFields
{
pDes_type = p.pDes_type,
pDes_rate = p.pDes_rate,
pTFi_fixedRate = p.pTFi_fixedRate
}
}
The problem I have is accessing p.pTFi_fixedRate: this is not returned with the Products collection of entities, as it lives on the derived type Fixed. How do I return the properties of the derived types of Product (such as Fixed) using LINQ and the Entity Framework? I actually need to return some fields from all the different derived types (Disc, Track, etc.) for use in calculations. Should I return these as separate LINQ queries, checking the type of Product that is returned?
This is a really good question. I've had a look in the Julie Lerman book and scouted around the internet, and I can't see an elegant answer.
If it were me, I would create a data transfer object with all the properties of the types, and then have a separate query for each type and union them all up. I would insert blanks into the DTO properties where the properties aren't relevant to that type. Then I would hope that the EF engine makes a reasonable stab at creating decent SQL.
Example
var results = (from p in context.Products.OfType<Disc>()
               select new ProductDTO { basefield1 = p.val1, discField = p.val2, fixedField = "" })
              .Union(
               from p in context.Products.OfType<Fixed>()
               select new ProductDTO { basefield1 = p.val1, discField = "", fixedField = p.val2 });
But that can't be the best answer, can it? Are there any others?
So Fixed inherits from Product? If so, you should probably be querying for Fixed instead, and the Product properties will be pulled into it.
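A minimal sketch of that idea (reusing the archive flag and query shape from the question):
// Query the derived set directly; the base Product properties come along for free.
var fixedProducts = Product.Products.OfType<Fixed>()
    .Where(f => f.archive == false)
    .ToList(); // f.pTFi_fixedRate is directly accessible here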
If you are just doing calculations and getting some totals or something, you might want to look at using a stored procedure. It will amount to fewer database calls and allow for much faster execution.
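If you go the stored procedure route, a hypothetical sketch (the procedure name and parameter are made up, and a DbContext instance called context is assumed; SqlQuery is the EF DbContext API):
using System.Data.SqlClient;

// Let the database do the aggregation and return just the numbers.
var totals = context.Database
    .SqlQuery<decimal>("EXEC dbo.GetProductCalcTotals @ProductId",
        new SqlParameter("@ProductId", Prod_ID))
    .ToList();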
Well it depends on your model, but usually you need to do something like:
var model = from p in Product.Products.Include("SomeNavProperty")
.... (rest of query)
Where SomeNavProperty is the entity type that loads pTFi_fixedRate.
