Efficiently check unread count on entire account - ruby

To my understanding, there is no way to query an entire IMAP account for a total unread count, or for the UIDs of all recent messages, regardless of mailbox. To get a total unread count for the account, you apparently need to iterate over all mailboxes and check their status. I've done that, but it's very slow (45 seconds on one of my accounts with many mailboxes).
Mail.app can find new messages, even in deeply nested mailboxes, in just a couple seconds.
Is the speed here just a limitation of using Net::IMAP? Or am I missing some functionality that will return a more limited set of mailboxes, like only ones that have RECENT messages?
The only other option I can think of is to use response handlers, and also keep a cache of which mailboxes have a counter > 1, and then only check the combination of the two each cycle. But since I'm looking to do this in a script, eliminating the need to carry over a cache would be ideal, if not required.
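For reference, a minimal sketch of the slow per-mailbox approach described above, using Net::IMAP. The host, credentials and the "*" list pattern are placeholders, and it assumes plain password login and that every listed mailbox can be STATUSed:

require 'net/imap'

imap = Net::IMAP.new('imap.example.com', ssl: true)
imap.login('user@example.com', 'password')

total_unseen = 0
imap.list('', '*').each do |mailbox|
  next if mailbox.attr.include?(:Noselect)         # skip unselectable containers
  status = imap.status(mailbox.name, ['UNSEEN'])   # one round trip per mailbox
  total_unseen += status['UNSEEN'].to_i
end

puts "Total unread: #{total_unseen}"
imap.logout

One STATUS round trip per mailbox is exactly why this gets slow on accounts with many mailboxes.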

The canonical way to detect new messages in IMAP is via UIDNEXT. Issuing
A001 STATUS "foldername" (UIDVALIDITY UIDNEXT)
on each folder that you care about will give you the expected next UID for that folder. Here's what the RFC has to say:
Unique identifiers
are assigned in a strictly ascending fashion in the mailbox; as each
message is added to the mailbox it is assigned a higher UID than the
message(s) which were added previously. Unlike message sequence
numbers, unique identifiers are not necessarily contiguous.
The next unique identifier value is the predicted value that will be
assigned to a new message in the mailbox. Unless the unique
identifier validity also changes (see below), the next unique
identifier value MUST have the following two characteristics. First,
the next unique identifier value MUST NOT change unless new messages
are added to the mailbox; and second, the next unique identifier
value MUST change whenever new messages are added to the mailbox,
even if those new messages are subsequently expunged.
So just keep track of each folder's expected next UID and UIDVALIDITY value. If a STATUS command shows that either UIDNEXT or UIDVALIDITY has changed from your cached value, you know you need to check for new mail (if the former changed) or resync (if the latter changed).
Something like this:
imap.status("foldername", ["UIDNEXT", "UIDVALIDITY"])
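For illustration, a minimal Ruby sketch of the cache-and-compare loop, assuming a simple in-memory Hash cache keyed by folder name; resync and check_for_new_mail are hypothetical helpers standing in for whatever your script does in each case:

cache = {}   # e.g. { "INBOX" => { uidvalidity: 123, uidnext: 4567 }, ... }

imap.list('', '*').each do |mailbox|
  next if mailbox.attr.include?(:Noselect)

  status = imap.status(mailbox.name, ['UIDNEXT', 'UIDVALIDITY'])
  cached = cache[mailbox.name]

  if cached.nil? || cached[:uidvalidity] != status['UIDVALIDITY']
    resync(mailbox.name)              # UIDVALIDITY changed: cached UIDs are void
  elsif cached[:uidnext] != status['UIDNEXT']
    check_for_new_mail(mailbox.name)  # new messages were added since last check
  end

  cache[mailbox.name] = {
    uidvalidity: status['UIDVALIDITY'],
    uidnext:     status['UIDNEXT']
  }
end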

Related

D365 Same Tracking Token was assigned to Email/Case in Customer Service

One customer had a problem where an incorrect email (from another customer) was assigned to a case. The incorrectly assigned email is a response to a case that was deleted. However, the current case has the same tracking token as the deleted one. It seems that the CRM system reuses a tracking token as soon as it becomes available again. This should not happen! From our point of view this is a real programming error on Microsoft's part. The only solution we see is to increase the number of digits to the maximum so that it takes longer until all tracking tokens are used up. But in the end, you still reach the limit.
Is there another possibility or has Microsoft really made a big mistake in the way emails are allocated?
We also activated Smart Matching, but that didn't help in this case either, because the allocation was made via the Tracking Token first.
Thanks
The structure of the tracking token can be configured and is set to 3 digits by default. This means that as soon as 999 emails are reached, the tracking token starts again at 1, which is basically a design flaw on Microsoft's part.
If you have "Automatic replies" enabled, that limit is reached very quickly. We therefore had to increase the number to 9 digits, which is also not a 100% solution: at some point that number of emails is reached as well, and then emails are again assigned to cases that do not belong together. Microsoft needs to come up with another solution.
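To make the wrap-around concrete, here is a toy Ruby illustration (not D365 code; the numbers simply mirror the 3-digit default described above):

# With a 3-digit token there are only 999 distinct values, so the counter
# wraps and token 1 is handed out again, letting a new email match an old,
# unrelated case.
def token_for(sequence, digits)
  max_tokens = 10**digits - 1
  ((sequence - 1) % max_tokens) + 1
end

puts token_for(1, 3)      # => 1
puts token_for(1000, 3)   # => 1  (collision with the very first token)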

Exchange Server. Move Item operation. How to map new items to the moved ones?

In my mail app I'm moving messages between folders by using the MoveItem operation. When you move messages their ids are changed. In the response I'm receiving the new message ids. But the old ones are missing. And this is a big problem.
I have no idea how to map a new message id to an old one and can't update messages in my database with the new ids. Seems like I don't understand something simple. What's the point of returning new ids if you have no idea what message each one belongs to?
Am I supposed to rely on the order of response messages? If so, can you please give me a link to the corresponding piece of EWS documentation?
Or am I supposed to perform synchronization of mailboxes every time I move more than one message?
When you use MoveItem you pass in an array of ItemIds, and what you get back is an array of response objects.
The order of the items in the response collection matches the order in your request, so element 1 of the response represents the result of element 1 of the request. So you can just map them this way.
However, your response logic should be robust enough to deal with partial failures, e.g. half of the request executing okay while the rest fails because of throttling (so check the response status of each item), or a 501 arriving mid-move, which can leave you in an unknown state.
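A minimal sketch of that index-based mapping, independent of any particular EWS client library; old_ids and the shape of the response objects (success flag, new_item_id, error) are placeholders for whatever your MoveItem call sends and returns:

# old_ids:   the ItemIds passed to MoveItem, in request order
# responses: the per-item response messages, in the same order
def map_moved_ids(old_ids, responses)
  mapping = {}
  old_ids.zip(responses).each do |old_id, response|
    if response[:success]
      mapping[old_id] = response[:new_item_id]
    else
      # Partial failure (throttling, transient error, ...): leave the old id
      # unmapped and retry or resync this item later.
      warn "Move failed for #{old_id}: #{response[:error]}"
    end
  end
  mapping
end

The database update then becomes a walk over the returned mapping, with any unmapped ids flagged for a later retry or sync.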

What is the default value of the apns-expiration field?

The apns-expiration field governs how long Apple will hold on to an APNs message before giving up on delivering it (for example, if the device is turned off).
According to their docs, a value of zero means "no retention": if the message can't be delivered immediately, it's discarded.
But what happens if the header isn't specified? In other words, what is the default behavior?
My information isn't based on documentation but rather on stats gathered from a multi-million-user system. The policy at this time is to retain push messages for a long time (exactly how long I don't know; we've seen retention of about 1M seconds in some cases). Of course, as this isn't documented, it could change in the future.
Note that this default value is similar to Google's policy (where the default is 2419200 seconds), with the exception that Google's policy is documented.
https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/CommunicatingwithAPNs.html#//apple_ref/doc/uid/TP40008194-CH11-SW1
"If this value is nonzero, APNs stores the notification and tries to deliver it at least once, repeating the attempt as needed if it is unable to deliver the notification the first time."
Taken literally, this means that omitting the header is equivalent to a value of 0.

Google Calendar API : event update with the Ruby gem

I'm using https://github.com/google/google-api-ruby-client to connect to different Google APIs, in particular the Google Calendar one.
Creating an event, updating it and deleting it works most of the time with the examples one can usually find around.
The issue appears when one tries to update an event details after a previous update of the dates of the event.
In that case, the id provided is not enough and the request fails with an error:
SmhwCalendar::GoogleServiceException: Invalid sequence value. 400
Yet the documentation does not mention such things : https://developers.google.com/google-apps/calendar/v3/reference/calendars/update
The event documentation does describe the sequence attribute without saying much : https://developers.google.com/google-apps/calendar/v3/reference/events/update
What's needed to update an event?
Are there specific attributes to keep track of when creating and updating events, besides the event id?
How does the Ruby Google API client handle those?
I think my answer from Cannot Decrease the Sequence Number of an Event applies here too.
Sequence number must not decrease (and if you don't supply it, it's the same as if you supplied 0) + some operations (such as time changes) will bump the sequence number. Make sure to always work on the most recent copy of the event (the one that was provided in the response).
@luc's answer is pretty much correct, but here are some details.
Google API documentation is unclear about this (https://developers.google.com/google-apps/calendar/v3/reference/events/update).
You should consider that the first response contains a sequence number of 0.
The first update should send that sequence number back (alongside the title, description, etc.). The response to that request will contain an incremented sequence number (1 in this case) that you should store and reuse on the next update.
While the first update will assume a sequence number of 0 (and work) if you don't pass one, the second might still pass, but the third probably won't (because the server is expecting 1 as the sequence).
So that attribute might appear optional, but in practice it is not optional at all.
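As an illustration, a minimal sketch of the fetch-then-update pattern with the google-api-client gem; the service setup, calendar id and event id are placeholders, and the point is simply that the update is built from the most recently fetched copy of the event so the current sequence number travels back to the server:

require 'google/apis/calendar_v3'

calendar = Google::Apis::CalendarV3::CalendarService.new
calendar.authorization = authorization     # placeholder: your OAuth2 credentials

calendar_id = 'primary'
event_id    = 'your-event-id'              # placeholder

# Always start from the freshest copy of the event...
event = calendar.get_event(calendar_id, event_id)

# ...so the update carries the current sequence number.
event.summary     = 'Updated title'
event.description = 'Updated description'
updated = calendar.update_event(calendar_id, event_id, event)

# Store updated.sequence (or simply refetch) before the next update.
puts updated.sequence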

Google App Engine: Message class using list properties for receivers

I have a message model and I want it to have several receivers, possibly a lot of them.
I would also like to be able to tell for each receiver if the message was viewed or not (read/unread). Also I would like a receiver to be able to delete the message.
The two possible solutions are the following; for each I have a Message model and a User model.
For the first (using the ideas presented here http://www.google.com/events/io/2009/sessions/BuildingScalableComplexApps.html)
I have a MessageReceivers class which has a ListProperty containing the users that will receive the message, with its parent set to the message. I query this with messages = db.GqlQuery('SELECT __key__ FROM MessageReceivers WHERE receivers = :1', user) and then do a db.get([ key.parent() for key in messages ]).
The problem I have with this is that I'm not sure how to store the state of the message (whether it is read or not) and, as a follow-on issue, whether the user has new messages. An additional issue would be the overhead of deleting a message (I would have to remove the user from the receivers list property).
For the second: I have a MessageReceiver for each receiver; it has links to the message and to the user and also stores the state (read/unread).
Which of these two approaches do you consider to have better performance? And in the case of the first, do you have any suggestions on handling the status of the message?
I've implemented the first option in production. The drawback is that a ListProperty is limited to 2500 entries if you use a custom index. Shameless plug: see my blog post http://bravenewmethod.wordpress.com/2011/03/23/developing-on-google-app-engine-for-production/
On storing read state: I did this by implementing an entity that stored unread messages going back a few months and simply assumed older ones were read. Even simpler is to query the messages in date order, store the last known message timestamp in the entity, and assume everything older is read. I don't recommend keeping a long history in an entity with a huge list property, because reading and storing such entities can get really slow.
Message deletion is expensive, no way around that.
If you need to store state per message, your best option is to write one entity per recipient, with read state (and anything else, such as flags, etcetera), rather than using the index relation pattern.
