I'm writing a module to carry out 2-way synchronization between Tasks in a client system, with Task folders on an Exchange server, using Exchange Web Services.
It's working well, but I've come across a potential flaw in my approach.
When a Task change is saved in our system, I then update the Exchange server. This updates the SyncState value in EWS, leaving it out of sync with our local copy of the value. Hence, when we next call syncFolderItems() on EWS, the Task we just updated is returned to the client system as having been modified, and then is updated again (with the same values).
Is there any way to design the synchronization process to work around this?
Keeping a list of the Task Ids we have just updated locally before calling the sync, comparing it against the Task Ids returned from the syncFolderItems() call, and ignoring any matches would work, but it doesn't seem particularly slick.
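For what it's worth, here is a minimal sketch of that ID-comparison idea using the EWS Managed API, just to frame the question. The recentlyPushedIds set and the ApplyRemoteChange helper are made-up names, and the folder and page size are only placeholders:

using System.Collections.Generic;
using Microsoft.Exchange.WebServices.Data;

class TaskSynchronizer
{
    private readonly ExchangeService service;
    private string syncState; // persisted between sync passes
    // Ids of items we have just pushed to Exchange ourselves (hypothetical bookkeeping).
    private readonly HashSet<string> recentlyPushedIds = new HashSet<string>();

    public TaskSynchronizer(ExchangeService service) { this.service = service; }

    public void PushLocalChange(Task exchangeTask)
    {
        exchangeTask.Update(ConflictResolutionMode.AutoResolve);
        recentlyPushedIds.Add(exchangeTask.Id.UniqueId); // remember the echo we expect back
    }

    public void PullRemoteChanges()
    {
        ChangeCollection<ItemChange> changes = service.SyncFolderItems(
            new FolderId(WellKnownFolderName.Tasks),
            PropertySet.FirstClassProperties,
            null, 512, SyncFolderItemsScope.NormalItems, syncState);

        foreach (ItemChange change in changes)
        {
            // Skip changes that are just the echo of our own update.
            if (change.ChangeType == ChangeType.Update &&
                recentlyPushedIds.Remove(change.ItemId.UniqueId))
                continue;

            ApplyRemoteChange(change); // hypothetical: write the change into the client system
        }

        syncState = changes.SyncState; // store for the next pass
    }

    private void ApplyRemoteChange(ItemChange change) { /* client-system specific */ }
}

As far as I can tell, the SyncFolderItems overload used here also accepts an ignoredItemIds collection as its third argument, which might serve the same purpose without the manual bookkeeping, but I haven't verified how that interacts with subsequent sync states.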
I'm just starting work with Exchange Web Services, so apologies if I'm missing an easy trick here. Any advice on a better approach to achieving 2-way synchronization with EWS would be appreciated.
I have a microservice that needs some data it does not own. It needs a read-only cache of data that is owned by another service. I am looking for guidance on how to implement this.
I don't want my microservice to call another microservice. Too much data is used in a join for that to be successful. In addition, I don't want my service to be dependent on another service (which may be dependent on another ...).
Currently, I am publishing an event to a queue. Then my service subscribes and maintains a copy of the data. I am having problems staying in sync with the source system. Plus, our DBAs are complaining about data duplication. I don't see a lot of information on this topic.
Is there a pattern for this? What is it called?
First of all, there are a couple of ways to share data, and you mention two of them.
One service calls another service to get the data when it is required. This is good because you always get up-to-date data, and no extra management is required in the consuming service. The problem is that if you call the other service too often, its performance may be impacted.
Another solution is to maintain a local copy of that data in the consuming service using a Pub/Sub mechanism.
Depending on your requirements and architecture, you can keep this copy in the consuming service's own database or in some type of (persisted) cache.
The con here is consistency. When working with a distributed architecture you will not get strong consistency; you have to rely on eventual consistency.
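As a rough sketch of what the consuming-side subscriber can look like (the event, read-model, and handler names here are made up for illustration; wire the handler to whatever broker you use):

using System;

// Hypothetical event published by the owning service whenever its data changes.
public class CustomerChangedEvent
{
    public Guid CustomerId { get; set; }
    public string Name { get; set; }
    public long Version { get; set; }  // monotonically increasing per customer
    public bool Deleted { get; set; }
}

// Read-only local copy kept by the consuming service.
public interface ICustomerReadModel
{
    long? GetVersion(Guid customerId);
    void Upsert(Guid customerId, string name, long version);
    void Remove(Guid customerId);
}

public class CustomerEventHandler
{
    private readonly ICustomerReadModel readModel;

    public CustomerEventHandler(ICustomerReadModel readModel) { this.readModel = readModel; }

    // Hook this up to your queue subscription (RabbitMQ, Azure Service Bus, ...).
    public void Handle(CustomerChangedEvent evt)
    {
        // Ignore stale or duplicate deliveries: queues are usually at-least-once.
        long? current = readModel.GetVersion(evt.CustomerId);
        if (current.HasValue && current.Value >= evt.Version)
            return;

        if (evt.Deleted)
            readModel.Remove(evt.CustomerId);
        else
            readModel.Upsert(evt.CustomerId, evt.Name, evt.Version);
    }
}

The version check is what keeps the copy usable under redelivery and out-of-order messages, and a periodic full resync from the owning service can repair anything that was missed; together those address the "staying in sync" problem you mention.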
Another solution, depending on your requirements, is to move the tables that need to be joined into a separate service of their own. Whether that makes sense depends on your use case.
If you still want consistency, then instead of having the first service update the data and then publish, create a mediator component that calls the two services in a synchronous fashion, as sketched below. Things get complicated here, because you are now trying to implement a transaction over a distributed system.
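A bare-bones illustration of that mediator idea (the client interfaces are hypothetical):

using System;
using System.Threading.Tasks;

// Hypothetical clients for the two services involved.
public interface IOwningServiceClient    { Task UpdateCustomerAsync(Guid id, string name); }
public interface IConsumingServiceClient { Task RefreshCustomerAsync(Guid id, string name); }

// Mediator that updates both services in one synchronous flow instead of
// relying on an asynchronous publish.
public class CustomerUpdateMediator
{
    private readonly IOwningServiceClient owner;
    private readonly IConsumingServiceClient consumer;

    public CustomerUpdateMediator(IOwningServiceClient owner, IConsumingServiceClient consumer)
    {
        this.owner = owner;
        this.consumer = consumer;
    }

    public async Task UpdateAsync(Guid id, string name)
    {
        await owner.UpdateCustomerAsync(id, name);
        try
        {
            await consumer.RefreshCustomerAsync(id, name);
        }
        catch (Exception)
        {
            // No real rollback is possible here: the first call has already committed.
            // You would need a compensating action or a retry/outbox mechanism.
            throw;
        }
    }
}

The catch block is where the complication shows up: once the first call has committed, there is only compensation or retrying, which is why eventual consistency is usually the better trade-off.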
One more point: building a product around a microservice architecture is not only a technical move. As an organization and as a team, you need to understand that what works in a monolith does not work the same way in microservices. The DBAs need to understand that too: in microservices, duplication of data across schemas (and of other things, like code) is preferred over reuse.
Last but not least, if you always need to call another service to get data, it is worth reviewing the service boundaries as well. It may be that some services need to be merged, because the business functionality belongs together.
I have a workflow that requires me to hand a file around my team and each of my team members needs to do something with this document. They have to do it in a certain order and one after another.
The current solution is that I send an email to the first person with this file and wait until I receive the document back. Then I send the received document to the next person and so on...
I already looked at all the connectors; especially the email-with-options action from the Outlook connector and the Approvals connector look promising.
Getting the file into the workflow and attaching it to an email is easy, but I have been stuck for quite a few hours now on how to get the received file back into the workflow. I should add that, in the ideal case, the file goes directly back into the workflow without taking a detour through my mailbox.
There is a bunch of commercial solutions out there, e.g. Adobe Sign, but I would really like to solve this without having to upload my files to some other service and rely on another company (other than Microsoft, obviously).
I would really appreciate any suggestions on how one could solve this task!
Thanks a lot.
Short Answer
You need shared storage that all members of the process can access; the file is then opened and updated from there.
My recommendation (if your company's Teams/O365 groups are set up well) is to just use a specific folder in your team's SharePoint site (O365 group) that will be accessible via Teams, a browser, or any of the applications required.
This can then be done in the approval flow you're playing with, or via one or several approval flows within the context of a BPF.
Those methods:
Approval Flow
Business Process Flow (BPF)
Detail
Shared Storage
This won't be hard to sort out. If the people involved are only a few in a larger team, and the data is sensitive, then create a separate folder and restrict access. Otherwise, you should at least restrict write access, to ensure that only the people involved can modify the file.
As mentioned earlier, the only thing that could hold you back is the company's setup with regard to O365 groups, Azure (and regular) AD groups, and the literal teams. But it really shouldn't be an issue for this.
If the group infrastructure is bad, it's still all good: you can just lean into that and make a brand new team in Teams. Once you've done that, find the new O365 group it creates, and then manage it all from SharePoint (you can even add a tab in the Teams client to manage the process!) to ensure that the permissions are just right.
Approval Flow
Build the logic first. It should be relatively simple:
Person A performs their task, they click to say it's done.
Person B. Etc.
Then you can start worrying about the file, and how it's accessed and from where.
This is by far the easiest way to do things, and allows you to keep things as simple as possible. For the logic, just plot it out step by step; once you have that, look at where you can economise, and either loop elements or use variables so the flow doesn't depend on the specifics you began with.
With any luck, you'll soon have it doing most of the work for you. You can even ensure that copies of the file are made at each stage and are then archived, if you like.
Business Process Flow
This is my preferred option because it codifies the process, and you can make things as complicated as you like in the flow(s) themselves, separately.
The BPF will ably show the organisation how your team performs the task, i.e. Johnny edits, then Billy edits, then Jenna edits. However, at each stage (or for bespoke tasks) you can call on different flows to perform whatever tasks you need performed.
There are positives and negatives to this approach, mainly:
Positive - You can set it up without ANY automation, and you can use it to manage your current manual process.
Positive - Later you can start to instill the automations you need to process what is required.
Negative - This is advanced stuff, and it's not only difficult to learn, but it's difficult to get right. That said, the end result will be amazing.
I want to share my final solution, based on Eliot Coles' answer and lots of internet research.
Basically, I automated my mailbox, meaning that I use the Outlook connector to send and receive mails and handle the attachments between those.
The flow is triggered manually, where the user has to enter the email addresses of all the recipients and select the file to pass around. Then I store the recipients in an array to be able to loop over them later. Additionally, a unique ID is generated to identify the emails belonging to this flow later on.
Next there is a loop over all recipients. The file is sent to the first recipient in the array, and another loop waits for the recipient to reply to the message before continuing with the next one.
Finally, a closer look at the "receive loop": it runs until an email with an attachment arrives from the recipient. All emails filtered by the ID generated earlier are retrieved, and if one of them has an attachment, that attachment is stored in the file variable. If no email matches the criteria, the flow waits for some time and then checks the mailbox again.
At the very end, I send an email back to myself with the last received file, as the workflow is finished at that point.
I tried to take a more detailed look at REST (Spring seems to love it so much), and I am confused. REST is fine for basic CRUD operations on "resources", but it seems awkward for applications which need to keep state, such as workflow applications. I read in one of the answers on SO (can't find it now) that the state should be kept on the client. This seems strange: what then prevents the client from cooking up a request, claiming it is in a state which it is not in? A way around this could be that the server sends the client its next state signed, and the client then uses this signed state on its next request to the server. If anyone has seen a "RESTful workflow application", is this how things actually get done?
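To make that idea concrete, here is a rough, framework-agnostic sketch of what I have in mind (the token format and names are made up): the server issues the next state together with an HMAC, and only accepts a state back if the signature verifies.

using System;
using System.Security.Cryptography;
using System.Text;

public static class WorkflowStateToken
{
    // Server-side: issue the client's next state together with an HMAC over it.
    public static string Issue(string state, byte[] serverSecret)
    {
        using (var hmac = new HMACSHA256(serverSecret))
        {
            byte[] sig = hmac.ComputeHash(Encoding.UTF8.GetBytes(state));
            return state + "." + Convert.ToBase64String(sig);
        }
    }

    // Server-side: accept a claimed state only if the signature checks out.
    public static bool TryValidate(string token, byte[] serverSecret, out string state)
    {
        state = null;
        int dot = token.LastIndexOf('.');
        if (dot < 0) return false;

        string claimedState = token.Substring(0, dot);
        byte[] expected = Encoding.UTF8.GetBytes(Issue(claimedState, serverSecret));
        byte[] presented = Encoding.UTF8.GetBytes(token);

        if (!FixedTimeEquals(expected, presented)) return false;

        state = claimedState;
        return true;
    }

    // Constant-time comparison so signature prefixes don't leak via timing.
    private static bool FixedTimeEquals(byte[] a, byte[] b)
    {
        if (a.Length != b.Length) return false;
        int diff = 0;
        for (int i = 0; i < a.Length; i++) diff |= a[i] ^ b[i];
        return diff == 0;
    }
}

As far as I can tell this is essentially what signed cookies or JWTs do, but I don't know whether real "RESTful workflow" applications work this way, hence the question.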
REST = Representational STATE Transfer. Client and server exchange the state of a resource. So basically it's kept on the server and updated (/created/deleted) by the client.
REST isn't awkward at all for workflow applications, however you may define that. The most difficult part when designing REST applications is designing the representations and most of all the resources. A resource isn't merely an entity from the database.
As @NeilMcGuigan has mentioned, the RestBucks sample application deals with workflows. There's a video on the SpringSourceDev channel on YouTube where Oliver Gierke presents the application.
This is more a theoretical question than a practical one, but although I understand the principles of SOA, I am still a bit unsure whether they can be applied to any app.
The usual example is where a client wants to know something from a server, so we implement a service that can provide that information given a client request; it can be stateless or stateful, etc.
But what happens when we want to be notified when something happens on the server? Maybe we call a service to register a search and want to be notified when a new item that matches our search arrives at the server.
Of course, that can be implemented using polling with long timeouts, but I cannot see a way in the usual protocols to receive events from the server without making a call to ask for them.
If you can point me to an example, or tell me about an architecture that could support this, then you will have made my day.
Have you considered pub-sub (i.e. WS-Eventing, WS-Notification)? These are the usual means of pushing "stuff" to interested consumers/subscribers.
You want to use a Publish-Subscribe design. If you are using WCF, check out Programming WCF Services by Juval Löwy. In the appendix he shows how to build a Pub-Sub system that is actually fully per-call. It doesn't even rely on callback contracts or keeping long-running channels open, so it doesn't require any reconnection logic when communication is broken, let alone any polling.
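To be clear, this is not Löwy's framework, just a rough per-call-flavoured sketch of the shape of such a design: subscribers host their own endpoint, and the publishing side opens a fresh channel per event instead of holding a callback channel open. Contract names, bindings, and the one-subscriber-per-search simplification are all illustrative.

using System.Collections.Concurrent;
using System.ServiceModel;

// Contract each subscriber hosts at its own address so events can be pushed to it.
[ServiceContract]
public interface IItemEvents
{
    [OperationContract(IsOneWay = true)]
    void ItemMatched(string searchId, string itemDescription);
}

// Contract the publishing side exposes for subscribing and raising events.
[ServiceContract]
public interface ISubscriptionService
{
    [OperationContract]
    void Subscribe(string searchId, string subscriberAddress);

    [OperationContract]
    void Publish(string searchId, string itemDescription);
}

// Per-call service: no callback channel is held open between calls.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class SubscriptionService : ISubscriptionService
{
    // Static store so per-call instances share the subscriber list
    // (one subscriber per search here, purely to keep the sketch short).
    private static readonly ConcurrentDictionary<string, string> Subscribers =
        new ConcurrentDictionary<string, string>();

    public void Subscribe(string searchId, string subscriberAddress)
    {
        Subscribers[searchId] = subscriberAddress;
    }

    public void Publish(string searchId, string itemDescription)
    {
        string address;
        if (!Subscribers.TryGetValue(searchId, out address))
            return;

        // Open a fresh channel to the subscriber's endpoint for each event.
        var factory = new ChannelFactory<IItemEvents>(
            new BasicHttpBinding(), new EndpointAddress(address));
        IItemEvents proxy = factory.CreateChannel();
        try
        {
            proxy.ItemMatched(searchId, itemDescription);
        }
        finally
        {
            ((IClientChannel)proxy).Abort();
            factory.Abort();
        }
    }
}

In a real system you would keep the subscriber list in durable storage and support multiple subscribers per subscription, but the key point stands: nothing here requires duplex channels or client polling.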
I'm new to Exchange (2007) development, so please bear with me. :-) There appear to be a myriad of technologies for Exchange development -- the latest being Exchange Web Services and its related Managed API. I need to write a program that can -- if necessary -- run on the Exchange servers, to scan people's mailboxes for the purpose of purging messages that meet various criteria (irrelevant for this discussion).
It is my understanding that most of the other technologies -- WebDav, MAPI, CDO -- are now deprecated with respect to Exchange 2007 and Exchange 2010. So since this is a greenfield application, I decided to use the Exchange Web Services Managed API.
I'm concerned about the number of items I can scan per hour. Since it is web-services based, there is a network hop involved, so I'd like to run this utility on the server I am communicating with. Am I correct that I have to talk to a "Hub" server? I'm using Autodiscover, and it appears to resolve to a "hub" server no matter which mail server contains the actual message store I'm scanning.
When pulling multiple items down -- using ExchangeService.FindItems and specifying a page size of 500 -- I get pretty good throughput from my workstation to the hub server. I was able to retrieve 22,000 mail items in 47 seconds. That seems reasonable. However, it turns out that not all properties are "bound" when retrieved that way. Certain properties -- like ToRecipients and CcRecipients -- are not filled in. You have to explicitly bind them (individually) with a call to
Item.Bind(service, item.Id)
This is a separate round-trip to the server and this drops throughput down from about 460 items/second to 3 items per second -- which is unworkable.
So -- a few other questions. Is there any way to force the missing properties to be bound during the call to FindItems? Failing that, is there a way to bind multiple items at once?
Finally, am I right in choosing Exchange Web Services at all for this type of work? I love the simplicity of the programming model and would not like to move to another technology if it is (a) more complex or (b) deprecated. If another technology will do this job better, and it is not deprecated, then I would consider using it if necessary. Your opinion and advice are appreciated.
You can use the service to load many properties for many items in one call to the server -- it is designed exactly for your problem. It is just unfortunate that the Managed API documentation is still pretty thin on the ground.
results = folder.FindItems(...) // or whatever find call you are making
service.LoadPropertiesForItems(results, propertySet);
Where propertySet is something like:
PropertySet propertySet = new PropertySet(BasePropertySet.IdOnly, ItemSchema.Subject, customDefinitions);
Use the various *Schema classes (ItemSchema, EmailMessageSchema, and so on) to load only the specific fields you want, to minimise load if you are fetching lots of records back.
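Putting it together, here is a sketch of a paged scan that keeps the round trips down to two per page of 500 items. The folder choice, page size, and recipient properties are just examples; adjust them to your purge criteria.

using Microsoft.Exchange.WebServices.Data;

public static class MailboxScanner
{
    // Page through a folder 500 items at a time and load the recipient
    // properties with one batched call per page instead of one Item.Bind per item.
    public static void ScanFolder(ExchangeService service, WellKnownFolderName folderName)
    {
        PropertySet propertySet = new PropertySet(
            BasePropertySet.IdOnly,
            ItemSchema.Subject,
            EmailMessageSchema.ToRecipients,
            EmailMessageSchema.CcRecipients);

        ItemView view = new ItemView(500);
        FindItemsResults<Item> results;
        do
        {
            results = service.FindItems(folderName, view);

            // One round trip fills in the requested properties for the whole page.
            service.LoadPropertiesForItems(results, propertySet);

            foreach (Item item in results)
            {
                // Apply the purge criteria here.
            }

            view.Offset = results.NextPageOffset ?? 0;
        }
        while (results.MoreAvailable);
    }
}

That keeps you at FindItems plus one LoadPropertiesForItems (effectively a batched GetItem) per page, instead of one Item.Bind round trip per item.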