EWS Managed API with EXO - Folder.Empty against Purges folder - "The folder can't be emptied."

I'm trying to use EWS to clear out the Purges folder within Recoverable Items. I'm successfully binding to the purges folder and can search items to confirm I'm definitely looking at that folder.
When I try to use the Folder.Empty method with 0 (HardDelete) or 1 (SoftDelete) for the DeleteMode, and with either 0/1 or $true/$false for the second parameter (delete subfolders), I always get back "The folder can't be emptied." If I supply 2 (MoveToDeletedItems) for the DeleteMode / 1st parameter, it successfully moves the items from Purges back into the user-visible Deleted Items folder. I can then successfully empty that folder with Folder.Empty(0,0), after which the items are back in the Purges folder, where I see the same behavior again.
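For reference, the calls look roughly like this (a C# sketch against the EWS Managed API; the service setup, credentials and mailbox address are placeholders, and in my script the enum values are passed numerically):
ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2013_SP1);
service.Credentials = new WebCredentials("admin@contoso.com", "password"); // placeholder auth
service.Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx");
service.ImpersonatedUserId = new ImpersonatedUserId(ConnectingIdType.SmtpAddress, "user@contoso.com");
// Bind to Recoverable Items\Purges for the target mailbox
Folder purges = Folder.Bind(service, new FolderId(WellKnownFolderName.RecoverableItemsPurges, new Mailbox("user@contoso.com")));
purges.Empty(DeleteMode.HardDelete, true);            // "The folder can't be emptied."
// purges.Empty(DeleteMode.SoftDelete, true);         // same error
// purges.Empty(DeleteMode.MoveToDeletedItems, true); // works, items land in Deleted Items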
Does anyone know whether this is even possible? I've excluded the mailbox from a retention policy, then waited for DelayHoldApplied/DelayReleaseHoldApplied to become true (only the former did), and then ran Set-Mailbox -RemoveDelayHoldApplied. I've also set SingleItemRecoveryEnabled to $false. I'm not sure whether the SingleItemRecoveryEnabled change is subject to the potential 240-minute propagation delay, so I'll keep trying over the next 4 hours and post an update. I'm hoping somebody has factual knowledge about whether I should ever be able to use the Folder.Empty or Item.Delete methods on the Purges folder (or its items).
Thanks in advance.

Related

GA3 Event Push - Necessary fields in Request

I am trying to push an event to GA3, mimicking an event sent by a browser to GA. With this event I want to fill Custom Dimensions (visible in the User Explorer) and relate them to a GA client ID which has visited the website earlier. Can this be done without influencing website data too much? I want to enrich someone's data from an external source.
So far I can't seem to find the minimum set of fields which have to be in the event call for this to work. I've got these so far:
v=1&
_v=j96d&
a=1620641575&
t=event&
_s=1&
sd=24-bit&
sr=2560x1440&
vp=510x1287&
je=0&_u=QACAAEAB~&
jid=&
gjid=&
_u=QACAAEAB~&
cid=GAID&
tid=UA-x&
_gid=GAID&
gtm=gtm&
z=355736517&
uip=1.2.3.4&
ea=x&
el=x&
ec=x&
ni=1&
cd1=GAID&
cd2=Companyx&
dl=https%3A%2F%2Fexample.nl%2F&
ul=nl-nl&
de=UTF-8&
dt=example&
cd3=CEO
So far the Custom Dimension fields don't get overwritten with new values. Does anyone know what is missing, or can someone share a list of necessary fields and example values?
Ok, a few things:
A CD value will be overwritten only if that CD's scope in GA is set to the user level. Make sure it is.
You need to know the client ID of the user. Unless you track it in a CD, you can confirm that you have the right CID by using the User Explorer in the GA interface. It allows filtering by client ID.
You want to make this hit non-interactional, otherwise you're inflating the session count, since GA will generate sessions for normal hits. A non-interaction hit has ni=1 among the params.
Wait. Scope calculations don't happen immediately in real time; they happen later on. Give it two days, then check the results and re-run your experiment.
Use a throwaway/test/lower GA property to experiment. You don't want to affect production data while not knowing exactly what you're doing.
A good use case for such an activity would be something like updating the lifetime value of existing users and wanting to enrich the data with it without waiting for all of them to come back. That's useful for targeting, attribution and more.
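In case it helps, the Measurement Protocol only requires a handful of fields for an event hit: v, tid, cid (or uid) and t=event, plus ec and ea, and you add ni=1 and your user-scoped cdX on top. A rough C# sketch of such a hit (the property ID, client ID, dimension index and values are placeholders):
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

static async Task SendUserScopedCdAsync()
{
    var payload = new Dictionary<string, string>
    {
        ["v"]   = "1",            // protocol version
        ["tid"] = "UA-XXXXXX-Y",  // your (test) property
        ["cid"] = "GAID",         // client ID of the user you want to update
        ["t"]   = "event",        // hit type
        ["ec"]  = "enrichment",   // event category (required for event hits)
        ["ea"]  = "update",       // event action (required for event hits)
        ["ni"]  = "1",            // non-interaction, so no extra sessions
        ["cd1"] = "Companyx"      // user-scoped custom dimension value
    };
    using (var http = new HttpClient())
    {
        // POST to /debug/collect first for validation feedback, then switch to /collect.
        await http.PostAsync("https://www.google-analytics.com/collect", new FormUrlEncodedContent(payload));
    }
}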
Thank you.
This is the case, all CDs are user-scoped.
This is the case, we are collecting them.
ni=1 is within the parameters of each event call.
There are so many parameters; which ones are necessary?
We are using a test property for this.
We also have the Bot Filtering setting checked in the view settings.
It's hard to test when the User Explorer has a delay of 2 days and we are still not sure which parameters to use and which not. Who could help on the parameter part? My only goal is to update the CDs on the person. Who knows which parameters need to be part of the event call?

How to retrieve EVERY mail message (including Deleted, Archived, and Recoverable items) for an O365 user with Graph API?

We are working on an application in the compliance/monitoring space where we monitor the activity of an individual. Because of this, we want to pull EVERYTHING in a user's Office 365 mailbox - if it has text the user wrote or received, we want it, even if it was deleted, purged, etc.
We are using the Graph API and have an existing implementation that uses the standard "messages" GET command:
GET https://graph.microsoft.com/v1.0/me/messages
We are making use of the GraphApiClient (Microsoft.Graph v1.9.0), so the code actually looks like this:
IUserMessagesCollectionPage pageOfMessages = _graphClient.Users[userId].Messages.Request(options).Top(batchSize).Expand("attachments").GetAsync().Result;
However, at the very least this does not return any items from any of the RecoverableItems folders. After looking into it, I am now suspicious that there might be other folders that are not returned by this command either. There is quite the list of Well-known folder names and I'm not sure what others might not be included in a generic Messages request.
Based on this post, I know you can request the messages in the missing folders by WellKnownFolderName one at a time like this:
GET https://graph.microsoft.com/v1.0/me/MailFolders/RecoverableItemsDeletions/messages
It even works with the GraphApiClient:
IMailFolderMessagesCollectionPage pageOfMessages = _graphClient.Users[userId].MailFolders["RecoverableItemsDeletions"].Messages.Request(options).Top(batchSize).Expand("attachments").GetAsync().Result;
The problems with this are:
I don't know how to build a comprehensive list of every folder that has messages for the user
Some of the folders (like RecoverableItemsDeletions and ArchiveRecoverableItemsDeletions, for example) can contain duplicates so I would need to use a dictionary to get rid of the duplicates
It would be a lot more expensive to first build a list of relevant folders and then request their contents and their children's contents one request at a time.
At scale, a folder-by-folder implementation could be subject to throttling (if we are monitoring enough users with big enough mailboxes)
Does anyone know the best way to do this? Thanks!
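For illustration, the folder-by-folder approach I'm describing would look roughly like this inside an async method (the set of extra well-known folder names, the dedupe key and the lack of error handling are all just assumptions for the sketch):
var messagesById = new Dictionary<string, Message>();
var foldersToVisit = new Queue<string>();

// Seed with the folders the default listing returns, plus the recoverable-items folders mentioned above.
IUserMailFoldersCollectionPage topFolders = await _graphClient.Users[userId].MailFolders.Request().GetAsync();
foreach (var f in topFolders) foldersToVisit.Enqueue(f.Id);
foldersToVisit.Enqueue("RecoverableItemsDeletions");
foldersToVisit.Enqueue("ArchiveRecoverableItemsDeletions");

while (foldersToVisit.Count > 0)
{
    string folderId = foldersToVisit.Dequeue();

    // Queue child folders (paging of the folder collections omitted for brevity).
    var children = await _graphClient.Users[userId].MailFolders[folderId].ChildFolders.Request().GetAsync();
    foreach (var child in children) foldersToVisit.Enqueue(child.Id);

    // Page through the folder's messages, deduplicating as we go.
    IMailFolderMessagesCollectionPage page = await _graphClient.Users[userId].MailFolders[folderId].Messages.Request().Top(batchSize).GetAsync();
    while (page != null)
    {
        foreach (var msg in page)
            messagesById[msg.Id] = msg; // Id as dedupe key is an assumption; InternetMessageId may fit better
        page = page.NextPageRequest != null ? await page.NextPageRequest.GetAsync() : null;
    }
}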

In Firebase Database, how to read a large list then get child updates, efficiently

I want to load a large list (5,000 items, say) from a Firebase Database as fast as possible and then get any updates when a child is added or changed.
My approach so far is to use ObserveSingleEvent to load the entire list at startup:
pathRef.ObserveSingleEvent(DataEventType.Value, (snapshot) =>
and when that has finished use ObserveEvent for child changes and additions:
nuint observerHandle1 = pathRef.ObserveEvent(DataEventType.ChildChanged, (snapshot) =>
nuint observerHandle2 = pathRef.ObserveEvent(DataEventType.ChildAdded, (snapshot) =>
The problem is the ChildAdded event is triggered 5000 times, all of which are usually unnecessary and it takes a long time to complete.
(From the docs: "child_added is triggered once for each existing child and then again every time a new child is added to the specified path.")
And I can't just ignore the first update for each child, because it might be a genuine change (eg database updated just after the ObserveSingleEvent completed).
So I have to check each item on the client and see if it has changed. This is all very slow and inefficient.
I've seen posts suggesting to use a query with LimitToLast etc, but this is not reliable as child changes may be missed.
My Plan B is to not do ObserveSingleEvent to load the list and just rely on the 5000 child added calls, but that seems crazy and is slower to get the list initially, and probably uses more of the user's data allowance.
I would think this is a common scenario, so what is the best solution?
Also, I have persistence enabled, so a couple of related questions:
1) If the entire list has been loaded before, does ObserveSingleEvent with Value just load from disk with no network use when online (assuming no changes have happened since the last download), or does it download the whole list again?
2) Similarly for the 5,000 child-added calls - if it's all on disk, is there any network use when online if nothing has changed?
(I'm using C# for iOS, but I'm sure it's the same issue on other platforms.)
The wire traffic for a value listener and for a child_* listener is exactly the same. The events are purely a client-side interpretation of the underlying data.
If you need all children, consider using only the child_* listeners and dropping the pathRef.ObserveSingleEvent(DataEventType.Value, ...) call. It'll lead to the simplest code.
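In C# that could look roughly like this, reusing the pathRef from the question (the local cache shape and value handling are just placeholders for the sketch):
// Let the child_* events maintain a local cache keyed by the child's key;
// no separate Value read is needed.
var items = new Dictionary<string, DataSnapshot>();

nuint addedHandle = pathRef.ObserveEvent(DataEventType.ChildAdded, (snapshot) =>
{
    // Fires once per existing child when attached, then for every new child.
    items[snapshot.Key] = snapshot;
});

nuint changedHandle = pathRef.ObserveEvent(DataEventType.ChildChanged, (snapshot) =>
{
    items[snapshot.Key] = snapshot;
});

nuint removedHandle = pathRef.ObserveEvent(DataEventType.ChildRemoved, (snapshot) =>
{
    items.Remove(snapshot.Key);
});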

Ruby: Delete all S3 objects with prefix using one request

I currently do this to delete S3 objects in a folder / with a prefix:
require 'aws-sdk-s3'
bucket = Aws::S3::Resource.new.bucket('my-bucket')
bucket.objects(prefix: 'uploads/').map(&:delete)
This is probably slow for thousands of objects. I would prefer something like this to delete with one request:
bucket.delete(prefix: 'uploads/')
I can't find anything like that in the docs. Is such a thing possible?
You can delete at most 1000 objects in a single request. See this API call.
If you want to delete more than 1000 objects, you will have to issue more than 1 request.
There is no API call that allows you to delete all objects whose keys share a common prefix.
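With the resource interface you're already using, you can at least let the SDK batch the deletes into DeleteObjects calls of up to 1000 keys each, instead of one request per object (a sketch using the same bucket/prefix as above):
require 'aws-sdk-s3'

bucket = Aws::S3::Resource.new.bucket('my-bucket')
# batch_delete! groups the matching keys into DeleteObjects requests (max 1000 keys per request).
bucket.objects(prefix: 'uploads/').batch_delete!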
Another option is to define a rule under Object Lifecycle Management that expires (deletes) objects under a given path (prefix) older than a certain number of days (e.g. one day).
Lifecycle expiration deletes the files automatically and at no charge. However, it might take up to 24 hours for the files to be deleted.

Automatically rebuild cache

I run a Symfony 1.4 project with a very large amount of data. The main page and the category pages use pagers, which need to know how many rows are available. I'm passing a query containing joins to the pager, which leads to a loading time of 1 minute on these pages.
I configured cache.yml for the respective actions. But I think the workaround is insufficient and here are my assumptions:
Symfony rebuilds the cache within a single request which is made by a user. Let's call this user "cache-victim" to simplify things.
In our case, the data needs to be reasonably up to date - a lifetime of 10 minutes would be sufficient. Obviously, the cache won't be rebuilt if no user is willing to be the "cache-victim" and therefore just cancels the request. Are these assumptions correct?
So, I came up with this idea:
Symfony should fake the HTTP request after rebuilding the cache. The new cache entries should be written to a temporary file/directory and swapped with the previous cache entries as soon as cache rebuilding has finished.
Is this possible?
In my opinion, this is similar to the concept of double buffering.
Wouldn't it be silly if there was a single "gpu-victim" in a multiplayer game who sees the screen building up line by line? (This is a lop-sided comparison, I know ... ;) )
Edit
There is no "cache-victim" - Every 10 minutes page reloading takes 1 minute for every user.
I think your problem is due to some missing or wrong indexes. I have a sf1.4 project for a large soccer site (roughly 2M pages/day) and the pagers aren't that slow, even though our database has more than 1M rows these days. Take a look at your query with EXPLAIN and check where it goes bad...
Sorry for necromancing (is there a badge for that?).
By configuring cache.yml you are just caching the view layer of your app (that is, css, js and html) for REQUESTS WITHOUT PARAMETERS. Navigating the pager obviously has a ?page=X on the GET request.
Taken from the symfony 1.4 reference documentation on caching:
An incoming request with GET parameters in the query string or submitted with the POST, PUT, or DELETE method will never be cached by symfony, regardless of the configuration. http://www.symfony-project.org/reference/1_4/en/09-Cache
What might help you is to cache the database results, but it's a painful process in symfony/Doctrine. Refer to:
http://www.symfony-project.org/more-with-symfony/1_4/en/08-Advanced-Doctrine-Usage#chapter_08_using_doctrine_result_caching
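The gist of it is a couple of lines (a sketch; the APC driver, the place where you configure it and the 10-minute lifetime are assumptions):
// e.g. in ProjectConfiguration::configureDoctrine(), enable a result cache driver
$manager->setAttribute(Doctrine_Core::ATTR_RESULT_CACHE, new Doctrine_Cache_Apc());

// then on the heavy pager query:
$query->useResultCache(true, 600); // cache the result set for 10 minutes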
Edit:
This might help you as well:
http://www.zalas.eu/symfony-meets-apc-alternative-php-cache
