I have a fairly simple LINQ query (simplified code):
dim x = From Product In lstProductList.AsParallel
Order By Product.Price.GrossPrice Descending Select Product
Product is a class. Product.Price is a child class and GrossPrice is one of its properties. In order to work out the price I need to use Session("exchange_rate").
So for each item in lstProductList there's a function that does the following:
NetPrice=NetPrice * Session("exchange_rate")
(and then GrossPrice returns NetPrice+VatAmount)
No matter what I've tried I cannot access session state.
I have tried HttpContext.Current - but that returns Nothing.
I've tried Implements IRequiresSessionState on the class (which helps in a similar situation in generic http handlers [.ashx]) - no luck.
I'm using simple InProc session state mode. Exchange rate has to be user specific.
What can I do?
I'm working with:
web development, .Net 4, VB.net
Step-by-step:
page_load (in .aspx)
dim objSearch as new SearchClass()
dim output = objSearch.renderProductsFound()
then in objSearch.renderProductsFound:
lstProductList.Add(objProduct(1))
...
lstProductList.Add(objProduct(n))
dim x = From Product In lstProductList.AsParallel
Order By Product.Price.GrossPrice Descending Select Product
In Product.Price.GrossPrice Get :
return me.NetPrice+me.VatAmount
In Product.Price.NetPrice Get:
return NetBasePrice*Session("exchange_rate")
Again, simplified code, too much to paste in here. Works fine if I unwrap the query into For loops.
I'm not exactly sure how HttpContext.Current works, but I wouldn't be surprised if it only works on the main thread that is processing the HTTP request. That would mean you cannot use it on any other threads. When PLINQ executes the query, it picks some threads from the thread pool and evaluates the expressions in the query on those threads, so this may be the reason why your query doesn't work.
If the GrossPrice property needs to access only a single thing from the session state, it should be fairly easy to change it to a method and pass the value from the session state as an argument:
Dim rate = Session("exchange_rate")
Dim x = From product In lstProductList.AsParallel
Order By product.Price.GetGrossPrice(rate) Descending
Select product
Depending on where you use x later, you could also add a call to ToList to force the evaluation of the query (otherwise it may be executed lazily at some later time), but I think the change I described above should fix it.
You should really read the values out of session state into local variables that you know you need within the LINQ statement. Otherwise you're going back to the session collection for every single element, on every single thread, even though the value is essentially constant.
I’m experimenting with scripting a batch of OmniFocus tasks in JXA and running into some big speed issues. I don't think the problem is specific to OmniFocus or JXA; rather I think this is a more general misunderstanding of how getting objects works - I'm expecting it to work like a single SQL query that loads all objects in memory but instead it seems to do each operation on demand.
Here’s a simple example - let’s get the names of all uncompleted tasks (which are stored in a SQLite DB on the backend):
var tasks = Application('OmniFocus').defaultDocument.flattenedTasks.whose({completed: false})
var totalTasks = tasks.length
for (var i = 0; i < totalTasks; i++) {
    tasks[i].name()
}
[Finished in 46.68s]
Actually getting the list of 900 tasks takes ~7 seconds - already slow - but then looping and reading basic properties takes another 40 seconds, presumably because it's hitting the DB for each one. (Also, tasks doesn't behave like an array - it seems to be recomputed every time it's accessed.)
Is there any way to do this quickly - to read a batch of objects and all their properties into memory at once?
Introduction
With AppleEvents, the IPC technology that JavaScript for Automation (JXA) is built upon, the way you request information from another application is by sending it an "object specifier," which works a little bit like dot notation for accessing object properties, and a little bit like a SQL or GraphQL query.
The receiving application evaluates the object specifier and determines which objects, if any, it refers to. It then returns a value representing the referred-to objects. The returned value may be a list of values, if the referred-to object was a collection of objects. The object specifier may also refer to properties of objects. The values returned may be strings, or numbers, or even new object specifiers.
Object specifiers
An example of a fully-qualified object specifier written in AppleScript is:
a reference to the name of the first window of application "Safari"
In JXA, that same object specifier would be expressed:
Application("Safari").windows[0].name
To send an IPC request to Safari to ask it to evaluate this object specifier and respond with a value, you can invoke the .get() function on an object specifier:
Application("Safari").windows[0].name.get()
As a shorthand for the .get() function, you can invoke the object specifier directly:
Application("Safari").windows[0].name()
A single request is sent to Safari, and a single value (a string in this case) is returned.
In this way, object specifiers work a little bit like dot notation for accessing object properties. But object specifiers are much more powerful than that.
Collections
You can effectively perform maps or comprehensions over collections. In AppleScript this looks like:
get the name of every window of application "Safari"
In JXA it looks like:
Application("Safari").windows.name.get()
Even though this requests multiple values, it requires only a single request to be sent to Safari, which then iterates over its own windows, collecting the name of each one, and then sends back a single list value containing all of the name strings. No matter how many windows Safari has open, this statement only results in a single request/response.
For-loop anti-pattern
Contrast that approach to the for-loop anti-pattern:
var nameOfEveryWindow = []
var everyWindowSpecifier = Application("Safari").windows
var numberOfWindows = everyWindowSpecifier.length
for (var i = 0; i < numberOfWindows; i++) {
    var windowNameSpecifier = everyWindowSpecifier[i].name
    var windowName = windowNameSpecifier.get()
    nameOfEveryWindow.push(windowName)
}
This approach may take much longer, as it requires length+1 requests to get the collection of names.
(Note that the length property of collection object specifiers is handled specially, because collection object specifiers in JXA attempt to behave like native JavaScript Arrays. No .get() invocation is needed (or allowed) on the length property.)
Filtering, and why your code example is slow
The really interesting part of AppleEvents is the so-called "whose clause". This allows you to provide criteria with which to filter the objects from which values will be returned.
In the code you included in your question, tasks is an object specifier that refers to a collection of objects that have been filtered to only include uncompleted tasks using a whose clause. Note that this is still just a reference at this point; until you call .get() on the object specifier, it's just a pointer to something, not the thing itself.
The code you included then implements the for-loop anti-pattern, which is probably why your observed performance is so slow. You are sending length+1 requests to OmniFocus. Each invocation of .name() results in another AppleEvent.
Furthermore, you're asking OmniFocus to re-filter the collection of tasks every time, because the object specifier you're sending each time contains a whose clause.
Try this instead:
var taskNames = Application('OmniFocus').defaultDocument.flattenedTasks.whose({completed: false}).name.get()
This should send a single request to OmniFocus, and return an array of the names of each uncompleted task.
Another approach to try would be to ask OmniFocus to evaluate the "whose clause" once, and return an array of object specifiers:
var taskSpecifiers = Application('OmniFocus').defaultDocument.flattenedTasks.whose({completed: false})()
Iterating over the returned array of object specifiers and invoking .name.get() on each one would likely be faster than your original approach.
Answer
While JXA can get arrays of single properties of collections of objects, it appears that due to an oversight on the part of the authors, JXA doesn't support getting all of the properties of all of the objects in a collection.
So, to answer your actual question: with JXA, there is not a way to read a batch of objects and all their properties into memory at once.
That said, AppleScript does support it:
tell app "OmniFocus" to get the properties of every flattened task of default document whose completed is false
With JXA, you have to fall back to the for-loop anti-pattern if you really want all of the properties of the objects, but we can avoid evaluating the whose clause more than once by pulling its evaluation outside of the for loop:
var tasks = []
var taskSpecifiers = Application('OmniFocus').defaultDocument.flattenedTasks.whose({completed: false})()
var totalTasks = taskSpecifiers.length
for (var i = 0; i < totalTasks; i++) {
    tasks[i] = taskSpecifiers[i].properties()
}
Finally, it should be noted that AppleScript also lets you request specific sets of properties:
get the {name, zoomable} of every window of application "Safari"
But there is no way with JXA to send a single request for multiple properties of an object, or collection of objects.
Try something like:
tell app "OmniFocus"
tell default document
get name of every flattened task whose completed is false
end tell
end tell
Apple event IPC is not OOP, it’s RPC + simple first-class relational queries. AppleScript obfuscates this, and JXA not only obfuscates it even worse but cripples it too; but once you learn to see through the faux-OO syntactic nonsense it makes a lot more sense. This and this may give a bit more insight.
[ETA: Omni recently implemented its own embedded JavaScriptCore-based scripting support in its apps; if JS is your thing you might find that a better bet.]
I'm relatively comfortable with EWS programming and Exchange schemas, but running into an interesting problem to handle.
I have a propertyset, asking for:
ItemClass
DateTimeReceived
LastModifiedTime
Size
for every item in the AllItems folder at the root.
I get the result set, and then attempt LINQ queries against the set, particularly against DateTimeReceived. Not all items have a DateTimeReceived returned by the server, and those throw an exception. I'm trying a...
long msgCount = (from msg in allItems
                 where !msg.DateTimeReceived.Equals(null)
                 select msg).Count();
... which (IMO) should return the count of allItems that have a DateTimeReceived. However, the property isn't null; it simply isn't present on those items, and accessing it throws an exception.
I'm trying to avoid iterating through the set one by one, trying each record. Anyone have a thought or experience?
Thanks TTY for the input that definitely led to the following code that returns what I need. (Still in final testing)
List<EWS.Item> noReceivedProperty = inputlist.Where(m => (m.GetType().GetProperty("DateTimeReceived") != null)).ToList<EWS.Item>();
Then of course, take noReceivedProperty.Count or such as needed.
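For reference, the EWS Managed API also exposes Item.TryGetProperty, which reports whether a value was actually returned for an item without throwing, so you can filter on that directly. A minimal sketch, reusing allItems from the question and not tested against a live mailbox:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Exchange.WebServices.Data;

// Sketch: keep only the items for which the server actually returned DateTimeReceived.
// allItems is the result set from the original question.
List<Item> withReceived = allItems
    .Where(m =>
    {
        object value;
        return m.TryGetProperty(ItemSchema.DateTimeReceived, out value);
    })
    .ToList();
long msgCount = withReceived.Count;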
I have an EntityCollection ec in C# which has been populated with all Accounts.
Now I want another List or EntityCollection from ec which has all the accounts with status active.
I am using LINQ for this.
But both forms of LINQ return an empty result, while ec has 354 records.
var activeCRMEC = (from cl in ec.Entities
where cl.Attributes["statecode"].ToString()=="0"
select cl);
OR
var activeCRMEC = ec.Entities.Where(x => x.Attributes["statecode"].ToString() == "0");
Each time the result set is empty and I am unable to iterate over it. I have checked, and 300 or so accounts are active.
Same thing happens when I use some other attribute such as name etc.
Please be kind enough to point out my mistake.
You can generate early-bound classes to write LINQ queries, or else you can write late-bound LINQ queries using the OrganizationServiceContext class.
For your reference:
OrganizationServiceContext orgServiceContext = new OrganizationServiceContext(service);
var retrieveAll = orgServiceContext.CreateQuery("account")
    .ToList() // materialize first, then filter locally on the OptionSetValue
    .Where(w => w.GetAttributeValue<OptionSetValue>("statecode").Value == 0);
I'll give you a few hints, and then tell you what I'm guessing your issue is.
First, use early-bound entities. If you've never generated them before, use the early-bound generator; you'll save yourself a lot of headaches.
Second, if you can't use early-bound entities, use the GetAttributeValue<T>() method on the Entity class. It'll convert types for you and handle null reference issues.
Your LINQ expressions look to be correct, so either ec.Entities doesn't contain any entities that match the criteria of "statecode" equaling 0, or you possibly have some deferred execution occurring on your IEnumerables. Try calling ToList() on activeCRMEC immediately after the LINQ statement to ensure that is not your issue.
The statecode attribute is an OptionSetValue, so you should cast it this way:
((OptionSetValue)cl.Attributes["statecode"]).Value == 0
or
cl.GetAttributeValue<OptionSetValue>("statecode").Value == 0
Both ways are valid; either way you ask for the Value, which is an int.
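A minimal sketch putting that together with the original query, assuming ec is the populated EntityCollection from the question:

using System.Linq;
using Microsoft.Xrm.Sdk;

// Sketch: compare the statecode OptionSetValue's integer Value instead of ToString().
var activeCRMEC = ec.Entities
    .Where(cl => cl.Contains("statecode")
                 && cl.GetAttributeValue<OptionSetValue>("statecode").Value == 0)
    .ToList(); // materialize now so the count can be checked immediately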
Hope this can help you.
My colleague helped me start programming in C#; although I had no experience, I like it. All went well until I came across some problems we both can't fix. He uses SQL himself but started me off with LINQ.
To do a LINQ query I use this object: _oDBConnection (in clsApplication.cs)
So when opening the program this object is built. But it creates some problems:
When saving a new object (putting data into a table), I cannot load those values with a query. I need to restart the program.
When running 2 instances of the program, one is not getting the latest values when they are changed in the other (it does show newly added records, but not the changed ones!)
From these problems I can only conclude that when I call clsApplication._oDBConnection.tblTable a second time it is not going back to the database but is giving me the old state back.
This is the code he built:
public static DBReservationDataContext _oDBConnection;
private static frmMain _fMain;

public clsApplication()
{
    Thread.CurrentThread.Name = "main";
    clsErrorLog.ErrorLocation = "C:\\Software\\ErrorLog";
    clsErrorLog.setPassword("*****");
    clsErrorLog.LockApplication += new clsErrorLog.dLockApplication(lockApplication);

    _oDBConnection = new DBReservationDataContext();

    _fMain = new frmMain();
    _fMain.Show();
}
What can I do to fix this problem?
Example:
Although present in the database, it crashes on this query because the entity with id == iID is not found. But the iID is correct and it does exist in the database. The query will work after closing and restarting the program; then clsApplication is called again.
public clsReservationDetail(int iID)
    : this()
{
    _oReservationDetail = (from oReservationDetailQuery in clsApplication._oDBConnection.tblReservationDetails
                           where oReservationDetailQuery.ID == iID
                           select oReservationDetailQuery).First();
}
thx in advance
Your data context has a Refresh method, which can re-read entities it has already cached from the database and should allow your query to complete with no problems.
The static keyword makes it so that you have one reference per AppDomain. That is the wrong way to use DataContext instances.
Each instance of DataContext tracks the objects it has seen. This is for consistency. If you get a Customer instance with CustomerID = 4 from one query, you should get the same Customer instance from another query that returns the CustomerID = 4 record.
If you want to see the changes in the database, you must
1) Tell the datacontext to stop tracking changes. This must be done before the first query and makes the datacontext instance unable to SubmitChanges (since it can't track them anymore).
OR
2) Tell the datacontext to Refresh each instance you suspect has changed (see the sketch after this list). If you do that, you should specify how to resolve the conflict between your local changes and the remote changes - the simplest way of resolving this conflict is to have no local changes.
OR
3) (The right way) Make a new DataContext instance and load the record with that!
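A minimal sketch of option 2, assuming the record in question was previously loaded through the shared static context:

using System.Data.Linq; // RefreshMode (LINQ to SQL)

// Sketch: re-read the row from the database, discarding any local changes.
clsApplication._oDBConnection.Refresh(RefreshMode.OverwriteCurrentValues, _oReservationDetail);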
Also note: Since DataContext implements IDisposable, you are required to call Dispose when you are done with each instance even when exceptions occur. The using block is a good way to get that to happen.
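A minimal sketch of option 3, reusing the DBReservationDataContext, tblReservationDetails and _oReservationDetail names from the question (requires System.Data.Linq and System.Linq):

// Sketch: create a short-lived DataContext per operation instead of one static instance.
public clsReservationDetail(int iID)
    : this()
{
    using (var oDBConnection = new DBReservationDataContext())
    {
        _oReservationDetail = (from oReservationDetailQuery in oDBConnection.tblReservationDetails
                               where oReservationDetailQuery.ID == iID
                               select oReservationDetailQuery).First();
    }
}

Note that once the context is disposed, deferred loading of related tables on _oReservationDetail will no longer work, so read anything you need from its relationships before the end of the using block.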
I'm trying to improve the performance of a website written in classic ASP.
It supports multiple languages the problem lies in how this was implemented. It has the following method:
GetTranslation(id,language)
Which is called all over the shop like this:
<%= GetTranslation([someid],[thelanguage]) %>
That method just looks up the ID and language in SQL and returns the translation. Simple.
But it is incredibly inefficient. On each page load there are around 300 independent calls to SQL, each fetching an individual translation.
I have already significantly improved performance for many scenarios:
A C# tool that scans the .asp files and picks up references to GetTranslation
The tool then builds up a "bulk-cache" method that (depending on the page) takes all the IDs it finds and in one fell swoop caches the results in a dictionary.
The GetTranslation method was then updated to check the dictionary for any request it gets and only go to SQL if the value is not already in there (and cache its own result if necessary)
This only goes so far.
When IDs of translations are stored in the database, I can't pick these up (at least not easily).
Ideally the GetTranslation method would, on each call, build up one big SQL string that would only be executed at the end of the page request.
Is this possible in ASP? Can I have the result of a <%= ... %> to be a reference to something that is later resolved?
I would also sincerely appreciate any other creative ways I might improve the performance of this old, ugly beast.
I don't think you can do delayed execution in Classic ASP. As for suggestions on improving the performance, you could use a class like this:
Class TranslationManager
    Private Sub Class_Initialize
    End Sub

    Private Sub Class_Terminate
    End Sub

    Private Function ExistsInCache(id, language)
        ExistsInCache = _
            Not IsEmpty(Application("Translation-" & id & "-" & language))
    End Function

    Private Function GetFromCache(id, language)
        GetFromCache = Application("Translation-" & id & "-" & language)
    End Function

    Private Function GetFromDB(id, language)
        ' Get the result from the DB into resultFromDB here
        Application("Translation-" & id & "-" & language) = resultFromDB
        GetFromDB = resultFromDB
    End Function

    Public Default Function GetTranslation(id, language)
        If ExistsInCache(id, language) Then
            GetTranslation = GetFromCache(id, language)
        Else
            GetTranslation = GetFromDB(id, language)
        End If
    End Function
End Class
And use it like this in your code
Set tm = New TranslationManager
translatedValue = tm([someid], [thelanguage])
Set tm = Nothing
This would definitely reduce the calls to the DB. But you need to be very careful about how much data you put into the Application object; you don't want to run out of memory. It's best to also track how long the translations stay in memory and expire them (delete them from the Application object) when they have not been accessed for some time.
We use an i18n class with a namespace and language attribute in our e-commerce system. The class has a default function called 'translate' which basically performs a dictionary lookup. This dictionary is loaded, using the memento pattern, from a text file containing all the translations for the namespace and language.
The skeletons for these translation files are generated by a custom tool (written in VBScript actually) which parses the ASP files for i18n($somestring) calls. The filenames are based on the namespace and language, e.g. "shoppingcart_step_1_FR.txt". The tool can actually update/extend existing translation files when we add new translatable strings to the ASP code, which is very important for maintenance.
The performance overhead of this method is minimal. Due to the segmentation, our largest translation file contains about 200 translatable strings (including static image URLs). Loading it per request has very little effect on performance. I guess one could cache the translation dictionaries in the Application object using some third-party thread-safe dictionary, yet IMHO this isn't worth the trouble.
An extra tip: use variable replacement in your strings to improve translatability.
For example use:
<%=replace(i18n("Buy $productname"), "$productname", product.name)%>
instead of
<%=i18n("Buy")%> <%=product.name%>
The first method is much more flexible for the translators. A lot of languages have different sentence structures.
<%= x %> simply translates into Response.Write(x). There's no way to make that deferred.
In fact, Classic ASP has no way to make anything deferred, as far as I can remember.
You've already done a lot in terms of writing the caching tool. Next step would be writing a tool to convert these ASP pages to ASP.NET.
Your cache is the best saving you can make here. You could do away with the complication of the .NET pre-cacher and just have each call to GetTranslation check your dictionary for its entry; if it's not there, fetch it and cache it in Application scope. That would be lightning fast. Then there's just the problem of refreshing the cache every now and then, but that will be done for you roughly every 24 hours when the worker process gets recycled.
If you need it to be more up to date than that, you could pull out all your references as you do with your .NET cacher. Then you could create a new function that gets all the entries for a given set of IDs, and write a call to this function near the top of each ASP page, which would hit your DB with one SQL string and stash the results in a local dictionary. Then modify GetTranslation to use the values from this dictionary. The generated calls would need to be kept up to date, but that could be part of your build process or just a job that runs every hour or every night.
The approach I use in my own CMS is to load all the static language strings from the database into the Application object at start-up, using unique keys based on the variable name, language ID and CMS instance. Then I use page-level caching into an array as the page gets created, so repeatedly used variables get cached and avoid lots of trips to the Application object where possible. I use an array instead of a dictionary because of the way my CMS works: each page is created in sections, with each section isolated from the others, so I'd be creating multiple dictionaries per page, which is undesirable.
Obviously the viability of this solution depends entirely on the number of variables you have and the number of translations you have plus how many variables you need to retrieve on each page.