Detecting calendar-event mail items in Office365 REST Mail API

I am successfully retrieving user emails from our Exchange Online server, via REST requests and the ADAL library. We have been retrieving and processing calendar-event emails, and their associated calendar events, which are generated by Outlook, GMail/Google-Calendar, iPad, iPhone and Android devices.
We had been looking in the ClassName property for "meeting.request" or "meeting.cancelled", but those values were removed a week ago and have not returned. We then switched to checking for a non-null MeetingMessageType property (MeetingRequest or MeetingCancelled), but as of today, those properties have also been removed. This is incredibly valuable interop data, but I don't know where to look next.
How can I associate a retrieved JSON Message object, from a user's mailbox or a shared mailbox, with its associated (Exchange) calendar event? We can process meeting creations, invitations, acceptances, etc. from message items, which we then purge; whereas querying the calendars for new and updated events is much more intensive, since we certainly can't purge calendar events off the calendar as we process them!
Can I query the calendar for associated message Ids? I can't imagine this would be possible to do for every message.
Thanks!
Edit: @Venkat Thanks. Mail items are infinitely more processable than emergent calendar-event standards. As an Exchange dev, do you really need an example of how I can process a mail-bound event better as a mail item than as a calendar event item? OK, that's cool; here is one:
One thing we are doing is cc/bcc-ing mail/mtg-requests to specific mailboxes for processing (or using client and server rules to accomplish the same thing). We can then poll individual mailboxes, shared mailboxes, and/or collections of mailboxes to auto-respond or not... and to move to specific calendars or not, and to redirect to specific users or not, and to change header information during routing for further category classification or not, and to even replace recipients/attendees or not etc. etc. To do the same thing with REST calendar requests, we'd lose all server rules automation, all client rules automation, procedural auto-respond, all headers manipulation (data-insertion/extraction), etc. We're just trying to push events to a cloud app, for certain collections of users, using shared mailboxes which redirect to specific daemon accounts, which hold calendars for specific subsets of our users/clients.
Like everyone else, we are trying to integrate with cloud apps. So we need procedural parsing, data manipulation, and pushing of both mail and calendar items. For one thing, with mail we have the massive advantages of server mail-processing rules, client/user mail rules, mail header modifications (easy item data modification), mail auto-respond control, and blind recipients; calendar events don't get any of those things. For a second thing, we have a much more robust mail-folders taxonomy than calendar(s) taxonomy (which is almost non-existent). For a third thing, calendar-event mail items are user-specific and have less persistent value than shared calendar events. Finally, if we're processing mail items anyway, why not at least have an eventId for events? Why take out ALL interop information? Having an eventId completely eliminates the need for a query against a calendar endpoint returning multiple items, and adds no additional queries against a mail endpoint.
Google includes an attached ics. Even if you eliminate the event item attachment from the API mail item, I don't see why you have to remove the eventId. Processing calendar events by mail is nothing new, but we have to have a data-binding between the two objects, to do it. That is all.
My Exchange Server still knows when a mail item is a calendar event. It just won't tell me anymore if I ask it over REST. So, as a brutish workaround, I can set up a mail rule to add a category of "api_calendarEvent" to all incoming messages that are of type "Meeting Request". Then, after making a REST call for mail items, I can parse categories to manually repopulate a class property. But why remove the attachment, ClassName, MeetingMessageType, and EventId altogether from the mail item? Even if I made a server rule to re-categorize certain mail items in certain mailboxes as calendar events, and was able to know when to poll a calendar to get event details, would I always know which calendar to poll to find that event? All we'd need to avoid blind polling across multiple calendars is for you to retain the EventId and/or ClassName. Then we'd also have massive automation of calendar processing again, which has currently been removed from the API.
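As a rough sketch of that workaround (Ruby purely for illustration; the access token is whatever we already obtain via ADAL, and "api_calendarEvent" is just the category my rule applies):

require 'json'
require 'net/http'
require 'uri'

# Sketch only: pull recent messages over REST and keep the ones the
# transport rule tagged with the "api_calendarEvent" category.
def fetch_tagged_meeting_mail(access_token)
  uri = URI('https://outlook.office365.com/api/v1.0/me/messages?$select=Id,Subject,Categories&$top=50')
  request = Net::HTTP::Get.new(uri)
  request['Authorization'] = "Bearer #{access_token}"
  request['Accept'] = 'application/json'

  response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
  messages = JSON.parse(response.body).fetch('value', [])

  # Manually repopulate the "class" we lost, using the category as the marker.
  messages.select { |m| (m['Categories'] || []).include?('api_calendarEvent') }
end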
Thanks!

Thanks for the detailed response to my comment! Your scenario is something we wish to support. As part of schema clean-up, we removed the event ID and meeting message type from Message, as they were being included for every message. For calendar invites and responses, we plan to add back two properties:
1. A navigation link to the related event, so you can navigate to the event and take actions on it, if you have calendar permissions.
2. A calendar response type, e.g. Meeting Accepted, Meeting Declined, etc., so you know what type of response you have received.
We are currently working on the design and we don't have an exact timeline to share, but we will update our documentation as soon as we have this API available.
[UPDATE] We now return calendar event invites and responses as EventMessage which is a subclass of Message. This entity includes a property called MeetingMessageType and a navigation link to the corresponding Event on the user's calendar. See below for an example:
{
@odata.context: "https://outlook.office365.com/api/v1.0/$metadata#Users('<snipped>')/Messages/$entity",
@odata.type: "#Microsoft.OutlookServices.EventMessage",
@odata.id: "https://outlook.office365.com/api/v1.0/Users('<snipped>')/Messages('<snipped>')",
@odata.etag: "<snipped>",
Id: "<snipped>",
ChangeKey: "<snipped>",
Categories: [ ],
DateTimeCreated: "2015-04-08T14:37:55Z",
DateTimeLastModified: "2015-04-08T14:37:55Z",
Subject: "<snipped>",
BodyPreview: "",
Body: {
ContentType: "HTML",
Content: "<snipped>"
},
Importance: "Normal",
HasAttachments: false,
ParentFolderId: "<snipped>",
From: {
EmailAddress: {
Address: "<snipped>",
Name: "<snipped>"
}
},
Sender: {
EmailAddress: {
Address: "<snipped>",
Name: "<snipped>"
}
},
ToRecipients: [{
EmailAddress: {
Address: "<snipped>",
Name: "<snipped>"
}
}],
CcRecipients: [ ],
BccRecipients: [ ],
ReplyTo: [ ],
ConversationId: "<snipped>",
DateTimeReceived: "2015-04-08T14:37:55Z",
DateTimeSent: "2015-04-08T14:37:48Z",
IsDeliveryReceiptRequested: null,
IsReadReceiptRequested: false,
IsDraft: false,
IsRead: false,
WebLink: "<snipped>",
MeetingMessageType: "MeetingRequest",
Event@odata.navigationLink: "https://outlook.office365.com/api/v1.0/Users('<snipped>')/Events('<snipped>')"
}
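For example, once a message comes back typed as an EventMessage, you could follow the navigation link to act on the calendar item. A rough sketch in Ruby (the bearer token is whatever you already obtain via ADAL):

require 'json'
require 'net/http'
require 'uri'

# Rough sketch: if a retrieved message is an EventMessage, follow its
# navigation link to the corresponding Event on the calendar.
def related_event_for(message_json, access_token)
  return nil unless message_json['@odata.type'] == '#Microsoft.OutlookServices.EventMessage'

  event_link = message_json['Event@odata.navigationLink']
  return nil unless event_link

  uri = URI(event_link)
  request = Net::HTTP::Get.new(uri)
  request['Authorization'] = "Bearer #{access_token}"
  request['Accept'] = 'application/json'

  response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
  response.is_a?(Net::HTTPSuccess) ? JSON.parse(response.body) : nil
end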
Please let me know if our proposed changes meet your requirements, or if you have any questions or need more info.
Thanks,
Venkat

Related

Best practice for subscribe and publish architecture for chat-like app

I'd like to know what best practices exist around subscribing to changes and publishing them to the user. This is a pretty broad and vaguely worded question. Therefore, allow me to elaborate on this using an example.
Imagine the following (simplified) chat-like application:
The user opens the app and sees the home screen.
On this home screen a list of chat-groups is fetched and displayed.
Each chat-group has a list of users (members).
The user can view this list of members.
Each user/member has at least a first name available.
The user can change its name in the settings.
And now the important part: When this name is changed, every user that is viewing the list of members, should see the name change in real-time.
My question concerns the very last point.
Let's create some very naive pseudo-code to simulate such a thing.
The client should at least subscribe to something. So we could write something like this:
subscribeToEvent("userChanged")
The backend should on its part, publish to this event with the right data. So something like this:
publishDataForEvent("userChanged", { userId: "9", name: "newname" } )
Of course there is a problem with this code. The subscribed user now gets all events for every user. Instead it should only receive events for users it is interested in (namely the list of members it is currently viewing).
Now that is the issue I want to know more about. I could think of a few solutions:
Method 1
The client subscribes to the event, and sends with it, the id of the group he is currently viewing. Like so for example:
subscribeToEvent("userChanged", { groupId: "abc" })
Consequently, on the backend, when a user changes its name, the following should happen:
Fetch all group ids of the user
Send out the event using those group ids
Something like this:
publishDataForEvent("userChanged", { userId: "9", name: "newname" }, { groupIds: ["abc", "def"] })
Since the user is subscribed to a group with id "abc" and the backend publishes to several groups, including "abc", the user will receive the event.
A drawback of this method is that the backend should always fetch all group ids of the user that is being changed.
Method 2
Same as method 1. But instead of using groupIds, we will use userIds.
subscribeToEvent("userChanged", { myUserId: "1" })
Consequently, on the backend, when a user changes its name, the following should happen:
Fetch all the user ids that relate to the user (so e.g. friendIds based on the users he shares a group with)
Send out the event using those friendIds
Something like this:
publishDataForEvent("userChanged", { userId: "xyz", name: "newname" }, { friendIds: ["1", "2"] })
An advantage of this is that the subscription can be somewhat more easily reused. Ergo the user does not need to start a separate subscription for each group he opens, since he is using his own userId instead of the groupId.
A drawback of this method is that it (like method 1, but probably even worse) potentially requires publishing the event to a lot of ids.
Method 3
This one is just a little different.
In this method the client subscribes on multiple ids.
An example:
On the client side the application gathers all users that are relevant to the current user. For example, that can be done by gathering all the user ids of the currently viewed group.
subscribeToEvent("userChanged", { friendIds: ["9", "10"] })
At the backend the publish method can be fairly simple like so:
publishDataForEvent("userChanged", { userId: "9", name: "newname" }, { userId: "9" } )
Since the client is subscribed to the user with userId "9", amongst other users, the client will receive this event.
An advantage of this method is that the backend publish method can be kept fairly simple.
Drawback of this is that the client needs quite some logic to subscribe to the right users.
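For concreteness, here is a naive in-memory sketch of method 1 (plain Ruby rather than my real GraphQL stack), keyed on group ids:

# Minimal in-memory sketch of method 1: subscriptions are keyed by group id,
# so a name change only has to be published to the changed user's groups.
SUBSCRIPTIONS = Hash.new { |h, k| h[k] = [] } # group_id => [callbacks]

def subscribe_to_event(_event, group_id:, &callback)
  SUBSCRIPTIONS[group_id] << callback
end

def publish_data_for_event(_event, data, group_ids:)
  group_ids.each do |gid|
    SUBSCRIPTIONS[gid].each { |cb| cb.call(data) }
  end
end

# Client viewing group "abc":
subscribe_to_event("userChanged", group_id: "abc") { |d| puts "#{d[:userId]} is now #{d[:name]}" }
# Backend, after fetching the changed user's group ids:
publish_data_for_event("userChanged", { userId: "9", name: "newname" }, group_ids: ["abc", "def"])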
I hope that the examples made the question clearer. I have the feeling I am missing something here. Major chat-app companies can't be doing it one of these ways, right? I'd love to hear your opinion about this.
On a side note, I am using GraphQL as a backend, but I think this question is general enough that it shouldn't play a role.
The user can change its name in the settings.
And now the important part: When this name is changed, every user that is viewing the list of members, should see the name change in real-time.
I assume the user can change his name via a form. The contents of that form will be sent with an HTTP request to a backend script that will make the change in a DB, e.g.
update <table> set field=? where userid=?
Preferred
This would be the point where that backend script would connect to your web socket server and send a message like:
{ opcode: 'broadcast', task: 'namechange', newname: 'otto', userid: '4711' }
The server will then broadcast to all connected clients:
{ task: 'namechange', newname: 'otto', userid: '4711' }
All clients that have a relationship to userid='4711' can now take action.
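A minimal sketch of that broadcast step, assuming a Ruby web socket server that keeps a registry of open connections (e.g. via something like the faye-websocket gem):

require 'json'

# Rough sketch of the broadcast step. CONNECTIONS is assumed to be populated
# when clients connect and pruned when they close.
CONNECTIONS = []

def handle_backend_message(raw)
  msg = JSON.parse(raw)
  return unless msg['opcode'] == 'broadcast'

  payload = JSON.generate(
    'task'    => msg['task'],     # 'namechange'
    'newname' => msg['newname'],
    'userid'  => msg['userid']
  )

  # Fan the event out; each client decides whether this userid is relevant
  # to the member list it is currently viewing.
  CONNECTIONS.each { |ws| ws.send(payload) }
end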
Alternative 1
If you can't connect your backend script to the web socket server, the client might send { opcode: 'broadcast', task: 'namechange', newname: 'otto', userid: '4711' }
right before the form is transmitted to the backend script.
This is shaky, because if anything goes wrong in the backend, the false message has already been delivered; or the client might die before the message goes out, and then no one will notice the change.

Getstream.io: Message format for deleted activities via socket/faye

I am implementing something like Facebook reactions via getstream.io.
Posting and removing activities ("reactions") works fine.
Basics:
While implementing the socket feature (faye) of getstream to reflect feed changes in realtime, I saw that the format of a socket message for new activities differs from the one for deleted activities.
Example having ONE reaction each in deleted and new:
{
"deleted": [
"d5b1aee0-5a1a-11e6-8080-80015eb61bf9",
"49864f80-5a19-11e6-8080-80015eb61bf9",
"47fe7700-5a19-11e6-8080-80015eb61bf9",
"4759ab80-5a19-11e6-8080-80015eb61bf9",
"437ce680-5a19-11e6-8080-80015eb61bf9"
],
"new": [
{
"actor": "user:55d4ab8a11234359b18f06f6:Manuel Reil",
"verb": "support",
"object": "control:56bf2fb884e5c0756e910755",
"target": null,
"time": "2016-08-04T11:48:23.168000",
"foreign_id": "55d4ab8a11234359b18f06f6:support:56bf2fb884e5c0756e910755",
"id": "58d9c000-5a39-11e6-8080-80007c3c41d8",
"to": [],
"origin": "control:56bf2fb884e5c0756e910755"
}
],
"published_at": "2016-08-04T11:48:23.546708+00:00"
}
I subscribe to the aggregated feed that follows a flat feed.
I add and remove activities via the flat feed.
Subscriptions to the flat and the aggregated feed both return the same message when adding and removing activities.
Challenges I am facing:
When I remove ONE activity (via foreign_id), why do 5 ids appear in the deleted array?
I need the foreign_id to reflect changes in the app while digesting the socket message from getstream.io. This works fine for new activities, as the full object is sent (see example above). However, for deleted activities the foreign_ids are missing, as just an array of ids is sent.
Potential approaches:
Can I somehow configure my getstream faye subscription or config to (also) return foreign_ids for the deleted items?
I could try to fetch those ids in addition based on the socket message, but this seems almost ridiculous.
Thank you very much.
Removing activities via foreign_id deletes all the activities with the given foreign_id present in the feed. This is one of the main upsides of using the foreign_id field: it allows for cascading deletes to a group of activities (e.g. Post and Likes is a typical use case where you want to delete one Post and all Likes related to it).
Another advantage of using foreign_id is that you don't have to keep track of the ID generated by Stream.
You should be able to solve your first problem by picking a value for the foreign_id field that is unique (e.g. the object ID from your database); this way you can still delete easily and avoid the cascaded delete behavior.
Regarding your second problem, if you are updating your UI based on the real-time updates, it also means you already read from the same feed, and that you have the list of activities with their IDs and foreign_ids. Selecting activities by activity_id should be just a matter of creating some sort of in-memory map (e.g. add a data-activity_id attribute to your DOM).
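For instance, a small sketch of such a map on the consuming side (assuming the activities were already fetched from the same feed, e.g. the 'results' array returned by stream-ruby):

# Rough sketch: keep an id => foreign_id map from the feed read, so the bare
# ids in the "deleted" array of a real-time message can be resolved.
def build_activity_index(feed_activities)
  feed_activities.each_with_object({}) { |a, idx| idx[a['id']] = a['foreign_id'] }
end

def foreign_ids_for_deleted(realtime_message, activity_index)
  (realtime_message['deleted'] || []).map { |id| activity_index[id] }.compact
end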

Mailchimp does not update the number of subscribers

I am using Mailchimp API in an application for sending weekly emails to users.
The flow is the following: at the beginning the list of subscribers is empty, so I make a request to fill it. After filling it, I make another request to create a campaign associated with the list. But when I try to send it, it says that the campaign is not ready to be sent because the list of subscribers is empty. Indeed, in the Mailchimp web interface the number of subscribers in the list is zero, but when I open the list, there are subscribers. After a while (several minutes), the number of subscribers gets updated and the campaign can be sent. And it is not about email confirmation, because I'm not using double opt-in.
Does anybody know what I am missing, or how long I must wait for the number of subscribers to get updated before making the request to create the campaign?
If it matters, I'm using Ruby on Rails and the Gibbon gem.
Here is the method for adding users to the list:
def add_subscriber_to_list data, list_id = default_list_id
Gibbon::Request.new.lists(list_id).members.create(body: data)
end
where data has the following format
{
email_address: 'email@test.com',
status: 'subscribed',
double_option: false,
merge_fields: {
FNAME: 'First name',
LNAME: 'Last name'
}
}
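One workaround I'm considering is to poll the list's reported member count before creating the campaign. A rough sketch (field names per the Mailchimp v3 API; response access may differ between Gibbon versions):

# Rough sketch, not a confirmed fix: wait until Mailchimp's list stats catch
# up with the members that were just added, then create the campaign.
def wait_for_subscribers(list_id, expected_count, attempts: 10, delay: 30)
  attempts.times do
    resp  = Gibbon::Request.new.lists(list_id).retrieve(params: { "fields" => "stats.member_count" })
    stats = resp.respond_to?(:body) ? resp.body["stats"] : resp["stats"]
    return true if stats && stats["member_count"].to_i >= expected_count
    sleep delay
  end
  false
end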

Cancel appointment and associated resources in Outlook when created using EWS Managed API

I am using the EWS Managed API to create appointments on Exchange 2010.
Appointment appointment = new Appointment(exchangeService);
appointment.Subject = "Sample meeting";
appointment.Body = "Sample meeting body";
appointment.Start = bookingInfo.from;
appointment.End = bookingInfo.from.AddMinutes(bookingInfo.duration);
appointment.Location = meetingRoom.displayName;
appointment.Resources.Add(<my_room_mail>);
// Send the meeting request to all attendees and save a copy in the Sent Items folder.
appointment.Save(SendInvitationsMode.SendToAllAndSaveCopy);
This piece of code effectively creates an appointment in my Outlook, but the meeting room included as a resource is marked as "tentative" (not really accepted). So when I want to delete the meeting, the meeting room stays booked (busy/tentative) for the slot and it is impossible to delete the tentative booking.
If I delete the appointment from the EWS code (using the appointment ID), it works as expected: the room is effectively freed.
Appointment appointment = Appointment.Bind(exchangeService, new ItemId(itemId));
appointment.Delete(DeleteMode.MoveToDeletedItems);
Do you have any idea what the problem is? Outlook rights? Bad appointment creation or resource booking?
OK, I understand that Direct Booking is not compatible with EWS/OWA/mobile solutions (and also not with Outlook 2010/2013 without a registry tweak).
Direct Booking and Resource Booking Attendant (Auto Accept feature) are conflicting technologies, and if enabled together, unexpected behavior in calendar processing and item consistency can occur.
Check these for more details:
http://msexchangeanswers.blogspot.fr/2009/08/outlook-direct-booking-vs-auto-accept_17.html
http://exchangeserverinfo.net/2013/05/remove-auto-accept-from-outlook-for-all-room-mailbox/
The resource room needs to auto-accept the invitation so that it loses its tentative status. Then, when you delete the appointment from your calendar, it should automatically send a cancellation to the room. There is a setting on the delete to do this (I forget off the top of my head whether it's the default or not), but I think the underlying question is why the room is not configured to accept or decline the invite it is sent.

How would I design this scenario in Twilio?

I'm working on a YRS 2013 project and would like to use Twilio. I already have a Twilio account set up with over $100 worth of funds on it. I am working on a project which uses an external API and finds events near a location and date. The project is written in Ruby using Sinatra (which is going to be deployed to Heroku).
I am wondering whether you guys could guide me on how to approach this scenario: a user texts my Twilio number (the message would contain the location and date data), we process the body of that SMS, and we send back the results to the number that asked for them. I'm not sure where to start; for example, whether Twilio would handle some of that task, or whether I would just use Twilio's API to check for SMSes and return the results. I'm thinking about not using a database.
Could you guide me on how to approach this task?
I need to present the project on Friday, so I'm on a tight deadline! Thanks for your help.
They have some great documentation on how to do most of this.
When you receive a text, parse it into the format you need.
Feed it into your existing project, and when it returns the event or events in the area, check how long the resulting string is, because Twilio restricts messages to 160 characters or less.
Ensure that you split the message elegantly and not in the middle of an event. If you were returned "Boston Celtics Game" and "The Nutcracker Play", make sure that if both events cannot fit in one message, the first message says "Boston Celtics Game, another text coming in 1 second" or something similar.
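For example, a small sketch of such a splitter (plain Ruby, with the 160-character limit as a parameter):

# Rough sketch: pack event names into SMS-sized chunks without splitting an
# event name across two messages.
def split_into_sms(events, limit: 160)
  messages = [""]
  events.each do |event|
    candidate = messages.last.empty? ? event : "#{messages.last}, #{event}"
    if candidate.length <= limit
      messages[-1] = candidate
    else
      messages << event
    end
  end
  messages
end

split_into_sms(["Boston Celtics Game", "The Nutcracker Play"])
# => ["Boston Celtics Game, The Nutcracker Play"]  (both fit in one message here)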
In order to receive a text message from a mobile device, you'll have to expose an endpoint that is reachable by Twilio. Here is an example
class ReceiveTextController < ActionController::Base
  def index
    # let's pretend that we've mapped this action to
    # http://localhost:3000/sms in the routes.rb file
    message_body = params["Body"]
    from_number = params["From"]
    SMSLogger.log_text_message from_number, message_body

    # Return 200 OK; render TwiML here if you want Twilio to text a reply back.
    head :ok
  end
end
In this example, the index action receives a POST from Twilio. It grabs the message body and the phone number of the sender and logs them. Retrieving the information from the Twilio POST is as simple as looking at the params hash:
{
"AccountSid"=>"asdf876a87f87a6sdf876876asd8f76a8sdf595asdD",
"Body"=> body,
"ToZip"=>"94949",
"FromState"=>"MI",
"ToCity"=>"NOVATO",
"SmsSid"=>"asd8676585a78sd5f548a64sd4f64a467sg4g858",
"ToState"=>"CA",
"To"=>"5555992673",
"ToCountry"=>"US",
"FromCountry"=>"US",
"SmsMessageSid"=>"hjk87h9j8k79hj8k7h97j7k9hj8k7",
"ApiVersion"=>"2008-08-01",
"FromCity"=>"GRAND RAPIDS",
"SmsStatus"=>"received",
"From"=>"5555992673",
"FromZip"=>"49507"
}
