Overlapping times issue with 'findMeetingTimes' call in Microsoft Graph API - Outlook

When attempting to complete a findMeetingTimes call with the Microsoft Graph API (as documented here: https://developer.microsoft.com/en-us/graph/docs/api-reference/beta/api/user_findmeetingtimes), asynchronously looping and completing the call returns a 500 error, but only when two meetings are scheduled in exactly the same time slot (e.g. two meetings both scheduled for 4:30 PM - 5:00 PM). The exact error message in the returned error object is:
"Invalid value for arg:Overlaps are not supported within TimeSlots,
value:{"start":2017-05-10T20:00:00Z,"min":30} ↵Parameter name:
Overlaps are not supported within TimeSlots"
Does anyone have any suggestions for a fix or workaround for this?
Thanks

This was a bug in our API. We've deployed a fix that should clear this up.
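For anyone who hit this before the fix rolled out, or who sees a similar "Overlaps are not supported within TimeSlots" validation error, one client-side workaround is to merge overlapping slots before sending the request. A minimal TypeScript sketch, assuming the documented dateTimeTimeZone slot shape and that all slots share a single time zone (the helper is hypothetical):

```typescript
// A minimal sketch (not the official fix): collapse overlapping or touching
// slots into one before building the findMeetingTimes timeConstraint.
// Assumes all slots share one time zone, so the raw dateTime strings are
// directly comparable once parsed.
interface TimeSlot {
  start: { dateTime: string; timeZone: string };
  end: { dateTime: string; timeZone: string };
}

function mergeOverlappingSlots(slots: TimeSlot[]): TimeSlot[] {
  const sorted = [...slots].sort(
    (a, b) => Date.parse(a.start.dateTime) - Date.parse(b.start.dateTime)
  );
  const merged: TimeSlot[] = [];
  for (const slot of sorted) {
    const last = merged[merged.length - 1];
    if (last && Date.parse(slot.start.dateTime) <= Date.parse(last.end.dateTime)) {
      // Overlap: extend the previous slot instead of adding a new one.
      if (Date.parse(slot.end.dateTime) > Date.parse(last.end.dateTime)) {
        last.end = slot.end;
      }
    } else {
      merged.push({ start: slot.start, end: slot.end });
    }
  }
  return merged;
}
```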

Related

ElasticSearch delete_by_query: Trying to create too many scroll contexts

I am trying to fix this error while running the delete_by_query API on AWS ElasticSearch:
Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting.
After going through a few posts I have a basic idea of what a scroll context is; however, my understanding of "open scroll contexts" remains murky.
I have a few questions regarding my understanding:
Does it mean that the API will open a scroll context for the specified time (scroll = some period) and, after processing within the stipulated scroll time, open up a new context?
If so, do the previously processed contexts remain open till the API terminates?
I have 4 Java EC2 instances running, each of which will execute the delete_by_query API. Can this also cause too many scroll contexts to remain open, or is it unrelated?
Please do shed some light if there's anything lacking.
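One way to probe these questions empirically is to watch the node stats, which report how many search (scroll) contexts are currently open on each node. A rough TypeScript sketch, with a placeholder endpoint and the SigV4 request signing that AWS Elasticsearch requires omitted for brevity:

```typescript
// A diagnostic sketch: report how many search (scroll) contexts are open on
// each node right now, to see whether they accumulate while the instances run.
async function openScrollContexts(endpoint: string): Promise<Record<string, number>> {
  const res = await fetch(`${endpoint}/_nodes/stats/indices/search`);
  if (!res.ok) throw new Error(`stats request failed: ${res.status}`);
  const data = await res.json();
  const counts: Record<string, number> = {};
  for (const [nodeId, node] of Object.entries<any>(data.nodes)) {
    counts[nodeId] = node.indices.search.open_contexts;
  }
  return counts;
}
```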
Coming to fixing the error:
The straightforward solution would be to increase the search.max_open_scroll_context parameter; however, it has negative side effects, as mentioned in the tip/note section of this document.
Are there any other solutions?
Can increasing the batch size per scroll help?
Edit: The EC2 instances (2 in east-1 and 2 in west-2) are running a Spring application at a frequency of 1s (this frequency is deliberate and can't be changed due to some restrictions), listening to SQS (in the corresponding regions) for messages; delete_by_query then acts upon these messages (deleting based on some parameter from each message received).
Note: The SQS queues have a considerable amount of data coming in.
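Regarding the batch-size question above: both the batch size and the scroll keep-alive can be set directly on the call, which reduces how many contexts each invocation opens and how long they linger. A rough sketch using the documented scroll_size, scroll, and conflicts parameters (endpoint, index, and query are placeholders; AWS request signing is again omitted):

```typescript
// A minimal sketch: tune delete_by_query so each call opens fewer,
// shorter-lived scroll contexts.
async function deleteByMessageParam(endpoint: string, index: string, field: string, value: string) {
  const url =
    `${endpoint}/${index}/_delete_by_query` +
    `?scroll_size=5000` + // bigger batches => fewer scroll round-trips per call
    `&scroll=1m` +        // shorter keep-alive => contexts expire sooner
    `&conflicts=proceed`; // don't abort on version conflicts
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: { term: { [field]: value } } }),
  });
  if (!res.ok) throw new Error(`delete_by_query failed: ${res.status}`);
  return res.json();
}
```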

Exchange Web Service error: "Resources are unavailable. Try again later., Cannot seek a row."

I'm trying to get email items from Exchange (Office 365) using EWS, fetching them in chunks of 500 messages. Sometimes, when I call the findItem method, I get the error: "Resources are unavailable. Try again later., Cannot seek a row."
Googling didn't turn up anything. I don't understand what it means or how to solve it.
Thanks
I can't say I've run into this specific error before, but when dealing with O365 you'll often encounter these kinds of "go away, come back later" messages and will have to implement a retry mechanism. Reducing your chunk size might help as well, but the message itself contains your next step: "Try again later." If the request never succeeds even after retries, there might be a deeper issue, but from what you've described it sounds like a transient error.
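For illustration, a retry loop with exponential backoff might look roughly like this; the wrapped function stands in for whatever performs your FindItem call:

```typescript
// A minimal retry-with-backoff sketch; only the pattern is the point.
// findItemChunk in the usage line is a hypothetical stand-in for whatever
// wraps your EWS FindItem call.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let delayMs = 1000;
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      // Back off exponentially before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2;
    }
  }
}

// Usage (hypothetical helper): const items = await withRetry(() => findItemChunk(offset, 500));
```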
The email being retrieved is within a folder that may have too many items. Retrieving such items usually returns similar errors. Splitting the folder such that each folder holds a max of 70,000 items may help.

SonarQube API Issue search is only returning 100 results

Using SonarQube 5.1, I have been attempting to use the API's search feature to gather all of the issues pertaining to my current project, to display on a radiator. In the web interface, SonarQube indicates there are 71 major issues and 161 minor issues.
Using this search string
https://sonarqube.url.com/api/issues/search?projectKeys=myproject'skey
I get back a response with exactly 100 results. When I filter those results for only OPEN items, I get a total of 55 issues: 36 major, 19 minor.
This is done through a PowerShell script that authenticates to the SonarQube server and passes in the query, then deserializes the response into an array I can process (counting major/minor issues).
With the background out of the way, the meat of my question is: does anyone know why the responses I am receiving are capped at 100? In my research I saw others indicating that responses to an issue search were capped at 500 due to an outstanding bug; however, the number of issues I am looking for is far below that. The API's documentation indicates that it returns the first 10,000 issues. Is there a server-side setting that restricts the output returned for a search query?
Thanks in advance,
The web service docs show that 100 is the default value of the ps parameter. You can set the value higher, but it will still max out at 500.
You might have noticed a "paging" element in the JSON response. You can use it to calculate how many pages of results there are and loop through them using the p parameter to specify page number.
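Sketched in TypeScript rather than PowerShell for brevity, that paging loop looks roughly like this (base URL and token are placeholders):

```typescript
// A paging sketch, assuming the documented p (page) and ps (page size)
// parameters and the "paging" element of the response. Note the issues API
// itself stops at 10,000 results.
async function fetchAllIssues(baseUrl: string, projectKey: string, token: string) {
  const issues: unknown[] = [];
  const ps = 500; // maximum page size
  for (let p = 1; ; p++) {
    const res = await fetch(
      `${baseUrl}/api/issues/search?projectKeys=${projectKey}&ps=${ps}&p=${p}`,
      { headers: { Authorization: `Basic ${Buffer.from(`${token}:`).toString("base64")}` } }
    );
    const data = await res.json();
    issues.push(...data.issues);
    // Stop once every page reported by the paging element has been fetched.
    if (p * ps >= data.paging.total) break;
  }
  return issues;
}
```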

Adding a member to MailChimp with the API doesn't show in the web GUI until I add more

I am using API v3 to create a list and then add a member to it, but when I go to the web GUI at https://usxx.admin.mailchimp.com/lists/, I don't see the member.
When I then create a second list with a new member, the first member shows up in the first list, and the new list appears in the web GUI, but without its member.
I have tried adding two members at the same time, but still neither of them shows up; it looks like they only appear once I add list number 2. Any ideas?
Edit: After leaving it alone while waiting for an answer here, I reloaded the lists page and the member for the last list appeared. Is there some delay?
I can confirm that, as of this writing, MailChimp seems to have a delay: somewhere between 2 and 5 minutes after adding people to a list, they show up in the UI.
That said, I've observed in my tests that the API reflects updates immediately. My guess is that their UI caches results for a short duration.
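If you'd rather verify programmatically than wait on the UI, you can read the member straight back from the API. A rough sketch against the documented members endpoint (datacenter, list ID, and API key are placeholders):

```typescript
import { createHash } from "node:crypto";

// A minimal sketch: confirm via the API that a member exists, instead of
// trusting the (cached) web UI. Uses the documented subscriber hash, i.e.
// the MD5 of the lowercased email address.
async function memberExists(dc: string, apiKey: string, listId: string, email: string) {
  const subscriberHash = createHash("md5").update(email.toLowerCase()).digest("hex");
  const res = await fetch(
    `https://${dc}.api.mailchimp.com/3.0/lists/${listId}/members/${subscriberHash}`,
    { headers: { Authorization: `Basic ${Buffer.from(`anystring:${apiKey}`).toString("base64")}` } }
  );
  return res.status === 200; // 404 means the member isn't (yet) on the list
}
```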

Server Error upon joining many rooms in a short period of time

My application joins about 50 rooms for one user on one connection, all at once. After a couple of rooms join successfully, I start to get a server error back on some of the rooms.
The error is always the same; here it is:
Error: Server Error
at Object.i.build (https://cdn.goinstant.net/v1/platform.min.js:4:7501)
at Connection._onResponse (https://cdn.goinstant.net/v1/platform.min.js:7:25694)
at Connection._onMessage (https://cdn.goinstant.net/v1/platform.min.js:7:28812)
at Connection._onMessage (https://cdn.goinstant.net/v1/platform.min.js:3:4965)
at r.e (https://cdn.goinstant.net/v1/platform.min.js:1:4595)
at r.emit (https://cdn.goinstant.net/v1/platform.min.js:2:6668)
at r.e (https://cdn.goinstant.net/v1/platform.min.js:1:4595)
at r.emit (https://cdn.goinstant.net/v1/platform.min.js:3:7482)
at r.onPacket (https://cdn.goinstant.net/v1/platform.min.js:3:14652)
at r.<anonymous> (https://cdn.goinstant.net/v1/platform.min.js:3:12614)
It's not isolated to any particular rooms: sometimes half of them pass, sometimes nearly all pass, but there are almost always a couple that break.
What I have found is that with fewer than 10 rooms it won't break.
Is there any rate limiting on joining rooms that could be causing this? I'd rather not put a delay between each room join, but I can if I need to.
Update: It definitely has to do with how fast I'm connecting to the rooms. Spacing them out by 1s each makes it work every time; even a 100ms delay seems to work. I need to connect faster than that, though. Is there a fix for this?
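Until the fix described in the answer below went live, pacing the joins was the practical workaround. A rough sketch, assuming a GoInstant-style connection.room(name).join(callback) API:

```typescript
// A pacing sketch: join rooms one at a time with a small delay between
// joins. The 100 ms spacing mirrors what worked in the update above; this
// is a workaround, not a fix for the underlying bug.
function joinRoomsPaced(connection: any, roomNames: string[], delayMs = 100): Promise<void> {
  return roomNames.reduce(
    (chain, name) =>
      chain.then(
        () =>
          new Promise<void>((resolve, reject) => {
            connection.room(name).join((err: Error | null) => {
              if (err) return reject(err);
              setTimeout(resolve, delayMs); // wait before joining the next room
            });
          })
      ),
    Promise.resolve()
  );
}
```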
This isn't a case of rate-limiting or anything along those lines. It's a bug and we are working to fix it as soon as we can. We'll update you here once we have a solution deployed. If you'd like for us to email you a notification directly, drop us a message via our contact form (https://goinstant.com/contact). Just make reference to this issue and I'll make sure a note is added to email you directly as soon as the fix goes live.
Sorry for any inconvenience this may be causing you.
Regards,
Thomas
Developer, GoInstant
