Why am I getting constant time-out errors with the Teams Shifts API? (Microsoft Graph)

When querying a schedule in Teams Shifts that has a sizeable number of shifts (say, 100+ shifts over a month, which is roughly a 30-person team working every day), we constantly get 504 errors (gateway timeouts).
We've tried using $top and limiting the number of days we request to reduce the response size, but the Graph API for Shifts is very limited in its search and filtering capabilities.
Request example (from MS Flow, via a custom connector):
{
  "inputs": {
    "host": {
      "connection": {
        "name": "@parameters('$connections')['shared_medicus365-5fconnector-5f455cfd1c6d1a89ed-5fce5269b428f1d481']['connectionId']"
      }
    },
    "method": "get",
    "path": "/beta/teams/@{encodeURIComponent(items('fe_team')?['TeamID'])}/schedule/shifts",
    "queries": {
      "$filter": "sharedShift/startDateTime ge @{outputs('composeStartOfDay')} and sharedShift/endDateTime le @{body('6monthsAhead')}",
      "$top": "1000"
    },
    "authentication": "@parameters('$authentication')"
  },
  "metadata": {
    "flowSystemMetadata": {
      "swaggerOperationId": "ListTeamsShifts"
    }
  }
}
We're using exactly the methods described in the Microsoft documentation.
We get a 504 gateway-timeout response from Graph any time the response covers more than a few weeks' worth of shifts.
{
  "error": {
    "code": "UnknownError",
    "message": "",
    "innerError": {
      "request-id": "437ad3be-be70-4bbe-b972-f9e24b588b5c",
      "date": "2019-09-03T19:01:57"
    }
  }
}
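No answer was recorded for this question, but a common mitigation for per-request timeouts is to break the date range into small windows and retry each window on 504. A minimal sketch in Python, assuming a valid bearer token and team ID (the token, the one-week window size, and the function names are placeholders, not from the original post):

import time
from datetime import timedelta

import requests

GRAPH = "https://graph.microsoft.com/beta"

def fetch_shifts_window(token, team_id, start_iso, end_iso, retries=3):
    # One small window per request keeps the backend query cheap enough
    # to finish before the gateway's timeout.
    params = {
        "$filter": (f"sharedShift/startDateTime ge {start_iso} "
                    f"and sharedShift/endDateTime le {end_iso}"),
        "$top": "1000",
    }
    for attempt in range(retries):
        resp = requests.get(
            f"{GRAPH}/teams/{team_id}/schedule/shifts",
            params=params,
            headers={"Authorization": f"Bearer {token}"},
        )
        if resp.status_code != 504:
            resp.raise_for_status()
            return resp.json().get("value", [])
        time.sleep(2 ** attempt)  # back off, then retry the same window
    raise RuntimeError("window still timing out; try fewer days per window")

def fetch_shifts(token, team_id, start, end, days_per_window=7):
    # Walk the full range one week at a time (start/end assumed to be
    # naive UTC datetimes).
    shifts, cursor = [], start
    while cursor < end:
        window_end = min(cursor + timedelta(days=days_per_window), end)
        shifts += fetch_shifts_window(
            token, team_id,
            cursor.isoformat() + "Z", window_end.isoformat() + "Z")
        cursor = window_end
    return shifts

Smaller windows trade more round trips for requests that each stay well under the gateway timeout.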

Related

Problem creating a team with the Microsoft Graph API

I have a problem creating teams with the Microsoft Graph API. I can get/create groups, but when I try to get/create teams I get an error. I'm using Postman; the group has owners and members, just as the MS documentation describes, and it also has the permissions the docs ask for on groups. Can anybody help? I've looked everywhere for the same error but couldn't find it.
PUT https://graph.microsoft.com/v1.0/groups/{id}/team
Headers: Authorization: Bearer {token}, Content-Type: application/json
Body is
{
  "memberSettings": {
    "allowCreateUpdateChannels": true
  },
  "messagingSettings": {
    "allowUserEditMessages": true,
    "allowUserDeleteMessages": true
  },
  "funSettings": {
    "allowGiphy": true,
    "giphyContentRating": "strict"
  }
}
I always get the same error:
{
  "error": {
    "code": "BadGateway",
    "message": "Failed to execute backend request.",
    "innerError": {
      "request-id": "45eeba8a-9d35-45e8-b42e-c60da7a47dde",
      "date": "2020-01-23T21:55:44"
    }
  }
}
According to the Graph API docs for this, you're not calling the correct endpoint to create a new Team. It should be
POST https://graph.microsoft.com/beta/teams
and a payload similar to
Content-Type: application/json

{
  "template@odata.bind": "https://graph.microsoft.com/beta/teamsTemplates('standard')",
  "displayName": "My Sample Team",
  "description": "My Sample Team's Description",
  "owners@odata.bind": [
    "https://graph.microsoft.com/beta/users('userId')"
  ]
}
Note that the payload is slightly different, as per the docs, depending on whether you're using delegated versus application permissions.
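For illustration only, here is what that call might look like in Python with the requests library (the token acquisition and the owner's user ID are placeholders; the team is provisioned asynchronously, so a 202 Accepted with a Location header is the expected success response):

import requests

def create_team(token, display_name, description, owner_id):
    # With application permissions, at least one owner must be bound
    # in the payload; with delegated permissions the caller is implied.
    payload = {
        "template@odata.bind":
            "https://graph.microsoft.com/beta/teamsTemplates('standard')",
        "displayName": display_name,
        "description": description,
        "owners@odata.bind": [
            f"https://graph.microsoft.com/beta/users('{owner_id}')"
        ],
    }
    resp = requests.post(
        "https://graph.microsoft.com/beta/teams",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    # 202 Accepted: poll the operation in the Location header until
    # the team finishes provisioning.
    return resp.headers.get("Location")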

Deploying an AutoML NL Trained Model Fails

I'm working with the Google AutoML Natural Language API.
I already have a trained model.
In the beginning, when I trained the model, it was deployed and everything was fine. According to Google's new rules from 22 January 2019, models that have had no prediction traffic for 7 or more weeks will be silently undeployed and archived.
At the moment I can't predict any results with that trained model, because it is undeployed, as it was probably unused for 7 weeks.
Also according to Google, if I'd like to continue using the model I should redeploy it using the Deploy API.
https://cloud.google.com/natural-language/automl/docs/models#deploying_or_undeploying_a_model
I tried to redeploy the model and I get an error, so I can't make any predictions.
How can I deploy the model without errors, so I can start predicting results?
So, I'll show the steps I took to try to solve this problem:
1. Run the deploy request with the right data.
2. Run the operations request.
3. Wait for the deployment to finish.
4. Run the operations request again.
5. Show the list of models I have (it's UNDEPLOYED).
Step 1:
POST https://automl.googleapis.com/v1beta1/projects/{project}/locations/{location}/models/{Model ID}:deploy

{
  "name": "projects/{project}/locations/{location}/operations/{Model ID}",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
    "createTime": {Time},
    "updateTime": {Time}
  }
}
Steps 2 and 4:
GET https://automl.googleapis.com/v1beta1/projects/{project}/locations/{location}/operations

{
  "operations": [
    {
      "name": "projects/{project}/locations/{location}/operations/{Model ID}",
      "metadata": {
        "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
        "createTime": {Time},
        "updateTime": {Time},
        "progressPercent": 100
      },
      "done": true,
      "error": {
        "code": 4
      }
    }
  ]
}
Step 5:
GET https://automl.googleapis.com/v1beta1/projects/{project}/locations/{location}/models

{
  "model": [
    {
      "name": "projects/{project}/locations/{location}/models/{Model ID}",
      "displayName": {name},
      "datasetId": {dataset id},
      "createTime": {time},
      "deploymentState": "UNDEPLOYED",
      "updateTime": {time},
      "textClassificationModelMetadata": {}
    }
  ]
}
So I was expecting zero errors in the operations request once the model finished deploying, but it shows error code 4. I looked up error code 4 in the enum provided here: https://github.com/googleapis/googleapis/blob/master/google/rpc/code.proto
For error code 4:
// The deadline expired before the operation could complete. For operations
// that change the state of the system, this error may be returned
// even if the operation has completed successfully. For example, a
// successful response from a server could have been delayed long
// enough for the deadline to expire.
//
// HTTP Mapping: 504 Gateway Timeout
DEADLINE_EXCEEDED = 4;
I don't know why this timeout is happening.
I've already checked the quota limits, and everything there is fine.
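For reference, the deploy-and-poll flow from steps 1-4 could be scripted roughly like this (a sketch only; the token, project, location, and model ID are placeholders):

import time

import requests

BASE = "https://automl.googleapis.com/v1beta1"

def deploy_and_wait(token, project, location, model_id, poll_secs=30):
    headers = {"Authorization": f"Bearer {token}"}
    model = f"projects/{project}/locations/{location}/models/{model_id}"
    # Step 1: kick off deployment; the API returns a long-running operation.
    op = requests.post(f"{BASE}/{model}:deploy", headers=headers).json()
    # Steps 2-4: poll the operation until it reports done.
    while True:
        status = requests.get(f"{BASE}/{op['name']}", headers=headers).json()
        if status.get("done"):
            if "error" in status:
                # This is where the error above appears: code 4 is
                # DEADLINE_EXCEEDED, the gRPC equivalent of HTTP 504.
                raise RuntimeError(f"deploy failed: {status['error']}")
            return status
        time.sleep(poll_secs)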
This problem should be resolved now. Sorry about the inconvenience, but your model should be deployable now. Please try and write back if you still see an issue.

When sending multiple messages using await context.PostAsync(reply), they are sometimes received out of order

When we send messages over the Directline channel using the code below, they are sometimes received with their order swapped.
await context.PostAsync(msg1);
await context.PostAsync(msg2);
Expected:
msg1
msg2
But in some cases they come through as:
msg2
msg1
Is there any way to handle and prevent this?
I'm going to write this answer assuming you're using the Directline or REST API for receiving messages. I can update if that's not the case.
This entire answer is based on the Receive activities section of the bot docs, as well as some testing of the Directline API to confirm.
If you're connected via WebSocket, you should always be receiving the messages in order, provided there isn't some kind of size difference in the messages (like one has an attachment) that requires additional processing.
If you're not, messages are retrieved via a polling interval, meaning that your client likely sends a GET request every 5 or 10 seconds (varies by client) to retrieve all messages that have not already been retrieved.
Upon doing so, the client will receive something like this:
{
  "activities": [
    {
      "type": "message",
      "channelId": "directline",
      "conversation": {
        "id": "abc123"
      },
      "id": "abc123|0000",
      "from": {
        "id": "user1"
      },
      "text": "hello"
    },
    {
      "type": "message",
      "channelId": "directline",
      "conversation": {
        "id": "abc123"
      },
      "id": "abc123|0001",
      "from": {
        "id": "bot1"
      },
      "text": "Nice to see you, user1!"
    }
  ],
  "watermark": "0001a-95"
}
My guess is that your client is just running a foreach on the array of activities, which could be displaying them out of order. If you have the client order them by either timestamp or id, it should work.
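A minimal sketch of that ordering fix, assuming the polled payload shown above (the '<conversationId>|<sequence>' ID pattern is taken from the example response):

def order_activities(activities):
    # Directline IDs look like 'abc123|0001'; the numeric suffix gives
    # the send order, so sort on it before rendering the messages.
    return sorted(activities, key=lambda a: int(a["id"].rsplit("|", 1)[-1]))

Sorting by the timestamp field works too, but the sequence suffix avoids ties when two messages land in the same second.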

Data Factory Copy Activity met an internal service error

I have an ADF pipeline that copies 34 tables from an on-premises Oracle database to an Azure Data Lake Store; 32 of these copy just fine on a daily basis, while the other 2 consistently fail with...
Copy activity met an internal service error.
For more information, provide this message to customer support. ErrorCode: 8601 GatewayNodeName=XXXXXXXX,
ErrorCode=SystemErrorOdbcWrapperError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
Message=Unknown error from wrapper.,
Source=Microsoft.DataTransfer.ClientLibrary.Odbc.OdbcConnector,
''Type=Microsoft.DataTransfer.ClientLibrary.Odbc.Runtime.ValueException,Message=[DataSource.Error] The ODBC driver returned an invalid value.,Source=Microsoft.DataTransfer.ClientLibrary.Odbc.Wrapper,'.
The activity JSON is templated, so it is identical for all 34 activities. I can run the oracleReaderQuery in Oracle SQL Developer using the same connection details and credentials and get results.
Searches for this have turned up one unanswered question here on Stack Overflow, and another on a Microsoft forum with a response saying "We will get back to you ASAP when we have new updates"... but there are no updates.
It seems I am not the only one having this issue; has anyone found a solution?
I have tried a one-off copy in ADF but get the same result; I have also tried copying the table to blob storage, with the same result.
Can anyone help me fathom what is wrong here, please?
The activity JSON is as follows...
{
  "type": "Copy",
  "typeProperties": {
    "source": {
      "type": "OracleSource",
      "oracleReaderQuery": "SELECT stuff FROM <source table>"
    },
    "sink": {
      "type": "AzureDataLakeStoreSink",
      "writeBatchSize": 0,
      "writeBatchTimeout": "00:00:00"
    }
  },
  "inputs": [
    {
      "name": "<source table dataset>"
    },
    {
      "name": "<scheduling dependency dataset>"
    }
  ],
  "outputs": [
    {
      "name": "<destination dataset>"
    }
  ],
  "policy": {
    "timeout": "02:00:00",
    "concurrency": 1,
    "retry": 3,
    "longRetry": 2,
    "longRetryInterval": "03:00:00",
    "executionPriorityOrder": "OldestFirst"
  },
  "scheduler": {
    "frequency": "Day",
    "interval": 1
  },
  "name": "Copy Activity 34",
  "description": "copy activity"
}
As I said though, this is identical, apart from the table it is accessing, to the 32 activities that work perfectly fine.
What's the data type of stuff in your table?

Google Calendar API v3: FreeBusy request returning "The requested time range is too long."

It seems that Google Calendar's freeBusy method will not accept timeMin/timeMax ranges beyond two months or so. How am I supposed to get free/busy information for the calendar between now and forever (or a distant point in the future)?
Request:
{
  "items": [
    {
      "id": "MY_GMAIL_CALENDAR_ID"
    }
  ],
  "timeMin": "2015-09-19T00:00:00-04:00",  // today
  "timeMax": "2016-09-19T00:00:00-04:00"   // 1 year from now
}
Response:
{
  "error": {
    "errors": [
      {
        "domain": "calendar",
        "reason": "timeRangeTooLong",
        "message": "The requested time range is too long.",
        "locationType": "parameter",
        "location": "timeMax"
      }
    ],
    "code": 400,
    "message": "The requested time range is too long."
  }
}
Currently the maximum time range for a free/busy query is around three months, but that value is subject to change without warning. Instead, a best practice is to use a reasonably small range (like one month) and execute multiple queries if you need free/busy information over a longer time period.
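A sketch of that chunked approach in Python (the calendar ID and token are placeholders; one-month windows and naive UTC datetimes assumed):

from datetime import timedelta

import requests

FREEBUSY_URL = "https://www.googleapis.com/calendar/v3/freeBusy"

def freebusy_chunked(token, calendar_id, start, end, days=30):
    # Issue one freeBusy query per ~month-long window and merge the
    # busy intervals, staying under the API's maximum range.
    busy, cursor = [], start
    while cursor < end:
        chunk_end = min(cursor + timedelta(days=days), end)
        resp = requests.post(
            FREEBUSY_URL,
            json={
                "items": [{"id": calendar_id}],
                "timeMin": cursor.isoformat() + "Z",
                "timeMax": chunk_end.isoformat() + "Z",
            },
            headers={"Authorization": f"Bearer {token}"},
        )
        resp.raise_for_status()
        busy += resp.json()["calendars"][calendar_id].get("busy", [])
        cursor = chunk_end
    return busy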
