I'm building an object model layer on top of EWS. Internally, the object I expose is an EmailMessage with several standard properties that I load on startup. The user can take different "views" of that object (ViewAsEntity1, ViewAsEntity2), each of which loads other custom properties on demand (some are Extended Properties, some Internet Headers).
The problem is that I call Item.Bind() with one PropertySet (the standard properties), but when the user asks for ViewAsEntity1, I call ServiceObject.Load() with a new PropertySet, which means I either lose the original set of properties or re-retrieve them.
So my questions are:
1) If I call ServiceObject.Load twice with the same properties in the PropertySet, does Exchange really send the same data over again, or does it assume I already have it cached (if it is unchanged) and not send it again? And if so, does EWS know not to clobber the existing data in the PropertyBag?
2) If the answer to #1 is no, what's the best way to avoid making expensive EWS calls again and again? I was hoping to keep my object model layer thin and simply delegate property calls to the internal EWS Item object, but if that doesn't work, I guess I'll have to copy all the data into my objects manually. Is there a better way?
I'm using .NET 4.5, Managed EWS 2.0, and Exchange 2010 SP1.
I have taken a look at the code for the EWS Managed API. It seems that every time you call `ServiceObject.Load()`, the Managed API will send an HTTP request to your Exchange server. There is no caching built into the Managed API when using `ServiceObject.Load()`, which is a good thing in my opinion: how would the API know when the item on the server has changed and you need a refresh?
There may be some caching on the Exchange server - who knows - but you still have to make the HTTP call.
If you want to save some HTTP calls, you have to build caching into your own client.
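A minimal sketch of such a client-side cache, which keys loaded properties by item id and merges each new load into the existing bag instead of clobbering it or re-fetching. The `PropertyCache` class and `loadFromServer` hook are hypothetical, and the sketch is in JavaScript purely for illustration (the Managed API itself is .NET, where the same idea would wrap Item.Bind/Load):

```javascript
// Hypothetical client-side property cache: merges each load's results into a
// per-item bag so earlier property sets are neither clobbered nor re-fetched.
class PropertyCache {
  constructor(loadFromServer) {
    this.loadFromServer = loadFromServer; // (itemId, propertyNames) => { name: value }
    this.bags = new Map();                // itemId => Map(propertyName => value)
  }

  load(itemId, propertyNames) {
    const bag = this.bags.get(itemId) || new Map();
    this.bags.set(itemId, bag);
    // Only ask the server for properties we don't already have cached.
    const missing = propertyNames.filter(name => !bag.has(name));
    if (missing.length > 0) {
      const fetched = this.loadFromServer(itemId, missing);
      for (const [name, value] of Object.entries(fetched)) {
        bag.set(name, value); // merge into the bag, don't replace it
      }
    }
    return bag;
  }
}
```

The trade-off is staleness: a cache like this never sees server-side changes, so you would still want an explicit refresh path that bypasses it.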
Related
I have what I hope is a quick question. We are trying to track down why a specific EWS/REST API URL is being returned in OWA by Office Add-ins. Are you able to share the mechanism by which the REST and EWS URLs are determined when running Office.context.mailbox.ewsUrl (or .restUrl)? How does the framework determine the right URL to use when in OWA? It doesn't appear to make any extra calls to Exchange. The specific JS in use is outlook-web-16.01.js. It looks like when the extensions load, a service.svc action called GetExtensibilityContext is used, and this returns the EWS and REST URLs. However, we were hoping for some more information about which properties in Exchange impact which URL is used here.
What we are seeing is that external URLs are returned that are set on only four servers; other servers in the environment, including the servers where the mailbox in question resides, do not have an external URL set.
Is it by design that if an external URL is set anywhere in the environment, that is what is returned for the EWS/REST URLs?
Whenever an add-in is launched, it creates an instance of an Office object inside the global window object.
Many of the common attributes are stored in that Office instance itself,
like the ItemId of the item on which the add-in was opened.
You can check all the Office attributes in the console itself:
switch the JavaScript context to the iframe (since add-ins are loaded inside an iframe),
then console.log the window.Office.context.mailbox object. You'll find all the attributes stored for the item there.
Hope this answers your query.
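The inspection step above can be wrapped in a small helper. `Office.context.mailbox` and its `ewsUrl`/`restUrl`/`item.itemId` attributes are real office.js properties, but the `dumpMailboxInfo` helper itself is just an illustration to run from the add-in iframe's console:

```javascript
// Pull the commonly needed attributes off an office.js mailbox object.
// In an add-in, call it as dumpMailboxInfo(Office.context.mailbox).
function dumpMailboxInfo(mailbox) {
  return {
    ewsUrl: mailbox.ewsUrl,    // EWS endpoint the framework resolved
    restUrl: mailbox.restUrl,  // REST endpoint resolved for this mailbox
    itemId: mailbox.item ? mailbox.item.itemId : null, // null in no-item contexts
  };
}
```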
I'm looking into Volt as an option for building an admin interface to our REST API. The API is a separate application. I would like the admin application to persist data to the API, but also to store its own data that is irrelevant to the API (such as admin users and notes on the API data objects) locally.
Is there a way to sync each local change in the Admin with our remote API, like a callback, for example? Or do I need to wait until the Data Provider API is ready as mentioned in the most recent Volt blog post (as of writing)?
So this is a fairly common need, and I think the long-term solution will be to support multiple stores in an app and have a REST data provider that you can extend. However, it might be a while before that's ready. In the meantime, you can always load and save data via tasks. (I realize it's not ideal, but it will work right now.) Let me know if you need more info on using tasks to load and save. I'll add the REST data provider to the TODO list.
Do I need to send individual entity updates to Web API, or can I POST an array of them and send them all at once? It seems like a dumb question, but I can't find anything that says one way or the other.
Brad has a blog post that talks about implementing batching support in Web API.
Also, the Web API samples project on CodePlex has a sample for doing batching in Web API hosted on ASP.NET.
It seems like Web API 2 has support for this.
From the site (Web API Request Batching):
Request batching is a useful way of minimizing the number of messages that are passed between the client and the server. This reduces network traffic and provides a smoother, less chatty user interface. This feature will enable Web API users to batch multiple HTTP requests and send them as a single HTTP request.
There are a number of samples for different scenarios on this page.
https://aspnetwebstack.codeplex.com/wikipage?title=Web+API+Request+Batching
You will have to create an action that accepts a collection of items.
If all you have is an action that accepts a single item, then you need to send separate requests.
With batching, always think about how you will report failures and whether the failure of a single item should invalidate the whole batch.
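From the client side, the difference is just whether you serialize one entity or the whole array into the request body. A sketch of both halves of that advice, where the `/api/items` endpoint and the per-item result shape are hypothetical:

```javascript
// Build one POST whose body is the whole array, instead of N single-item POSTs.
// The /api/items endpoint is a hypothetical collection-accepting action.
function buildBatchRequest(items) {
  return {
    url: "/api/items",
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(items),
  };
}

// Summarize a hypothetical per-item response so one bad item neither silently
// invalidates the whole batch nor hides its failure in an overall 200.
function summarizeBatchResponse(results) {
  const failed = results.filter(r => !r.ok);
  return { succeeded: results.length - failed.length, failed };
}
```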
I am trying to make a web app using ExpressJS and Coffeescript that pulls data from Amazon, LastFM, and Bing's web API's.
Users can request data such as the prices for a specific album from a specific band, upcoming concert times and locations for a band, etc... stuff like that.
My question is: should I make these API calls client-side using jQuery and getJSON or should they be server-side? I've done client-side requests; how would I even make an API call from the server side?
I just want to know what the best practice is, and also if someone could point me in the right direction for making server-side API requests, that would be very helpful.
Thanks!
There are two key considerations for this question:
Do calls incur any data access? Are the results just going to be written to the screen?
How & where do you plan to handle errors? How do you handle throttling?
Item #2 is really important here because web services go down all the time for a whole host of reasons. Your calls to Bing, Amazon and Last.fm will probably fail 1% or 0.1% of the time (based on my experience).
To make requests using server-side JS, you probably want to take a look at the Request package on npm.
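Given that ~1% failure rate, a thin retry wrapper around whatever HTTP client you pick is worth having. A sketch, where `fetchFn` stands in for your actual Request/https call and the backoff numbers are arbitrary:

```javascript
// Retry a flaky async call a few times with exponential backoff.
// fetchFn stands in for the real HTTP call (Request, https.get, etc.).
async function withRetry(fetchFn, { retries = 3, baseDelayMs = 200 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fetchFn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        // 200ms, 400ms, 800ms... - illustrative backoff schedule
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

This also gives you one place to add throttling or circuit-breaking later, instead of sprinkling it across every provider call.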
It's often good to abstract away your storage and dependent services to isolate changes and offer a consolidated, consistent web API for your application. But sometimes, if you have a good hypermedia web API (RESTful responses link to other resources), you can reference a resource link from another service in the response from your own service (e.g., an SO request could reference a user's Gravatar image/resource). There's no one size fits all - it depends on whether you want to encapsulate the dependency or integrate with it.
It might be beneficial to make the web API requests from your service and expose them via Express as your own web APIs.
Making HTTP web API requests from Node is easy. Here's another SO post covering that:
HTTP GET Request in Node.js Express
Well, the way you describe it, I think you may want to fetch data from Amazon, Last.fm and so on, process it with Node, save it in your database, and provide your own API.
You can use Node's http.request() to fetch the data and build your own REST API with Express.js.
We are in the process of designing/creating restful web services that will be consumed client side using XHR calls from various web pages. These web pages will contain components that will be populated by the data retrieved from the web services.
My question is: is it best to design the return data of the web services to match specifically what the client-side components will require for each page? Then only one XHR call would be required to retrieve all the data necessary to populate a specific AJAX component or to update a specific page. Or is it more advisable to develop generic web services that match, for instance, a database schema, and require multiple XHR calls client-side to retrieve all the data needed to populate an AJAX component? The second approach seems to lead to some messy code to chain calls together to retrieve all the data required before updating an AJAX component.
Hopefully this makes sense.
You should always design services based on what they are to provide. Unless you need a service that retrieves rows from the database, don't create one. You may find you need a service that returns complete business entities - they may be in multiple tables.
Or, you may just need a service to provide data for UI controls. In that case, that's what you should do. You may later find that two operations are returning almost the same data, so you may refactor that into one operation that returns the data for both.
My general rule of thumb is to send whatever is smallest over the AJAX call. In theory, the more data that is sent to the client, the slower the update process. This, of course, necessarily means specific services for specific pages.
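That page-specific shaping can still sit on top of generic entities: compose them server-side into the exact payload one component needs, so the client keeps its single XHR. A sketch, with made-up entity fields:

```javascript
// Compose generic entities into the exact shape one page's AJAX component
// needs, so the client gets everything in a single XHR. Fields are illustrative.
function buildDashboardPayload(user, recentOrders) {
  return {
    displayName: user.firstName + " " + user.lastName,
    orderCount: recentOrders.length,
    latestOrderTotal: recentOrders.length > 0 ? recentOrders[0].total : null,
  };
}
```

The generic services stay reusable, and only this thin composition layer knows about the page.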