What shall I do if the PIT in ElasticSearch is expired? - elasticsearch

I'm trying to use the Point In Time API to implement pagination, and I got the exception message below when using an expired PIT id:
Application exception: ResponseError: search_phase_execution_exception
Root causes:
search_context_missing_exception: No search context found for id [129]
It always happens when a user opens my web page and tries to scroll the list after a long time, for example after leaving the tablet and coming back two hours later.
My question is: what should I do when the PIT id has expired?
Is the only thing I can do to show an error message on my web page?

You would need to track where the user was (the page, or the last sort values of the previous request) in your application, and then open a new PIT and make a new request from that point. Elasticsearch cannot handle this for you, though.
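To illustrate, here is a minimal sketch using the official Python client (the endpoint, index name and created_at sort field are placeholders, and I'm assuming the missing search context surfaces as a 404/NotFoundError, matching your search_context_missing_exception): track the last sort values per user, and when the stored PIT id has expired, open a fresh PIT and replay the request:

from elasticsearch import Elasticsearch, NotFoundError

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint
INDEX = "my-index"                           # placeholder index
SORT = [{"created_at": "asc"}, {"_shard_doc": "asc"}]

def fetch_page(pit_id, last_sort_values, size=20):
    """Fetch the next page; reopen the PIT if the old one has expired."""
    try:
        resp = es.search(size=size, sort=SORT, search_after=last_sort_values,
                         pit={"id": pit_id, "keep_alive": "5m"})
    except NotFoundError:
        # The search context is gone, so open a new PIT and replay the request.
        # Note the new PIT sees the live index, so results may have shifted
        # slightly in the meantime.
        pit_id = es.open_point_in_time(index=INDEX, keep_alive="5m")["id"]
        resp = es.search(size=size, sort=SORT, search_after=last_sort_values,
                         pit={"id": pit_id, "keep_alive": "5m"})
    hits = resp["hits"]["hits"]
    last = hits[-1]["sort"] if hits else last_sort_values
    return pit_id, last, hits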

Related

Botframework Chat restarts after 30 minutes

I have a chatbot created using the Bot Framework React Web Chat component. I understand that after an hour the token expires and cannot be refreshed, but my issue is that the conversation seems to restart after 30 minutes.
So I refresh the token within the hour and get a 200 response confirming the token was refreshed. If it's within 30 minutes of the conversation's original start, I can continue the conversation as expected. If it's over 30 minutes, I will see the conversation history, but then it restarts from the beginning and I see the first message again.
I'm not sure where the issue lies, what information I can provide, or how to troubleshoot, so please let me know if you have any clue on how to fix this.
EDIT: I'm wondering if this is possibly related to the userID. If I try to set the userID in the React component, I get the message "connectSaga.js:58 Web Chat: user ID is both specified in the Direct Line token and passed in, will use the user ID from the token".
How does the user ID get set in the token? How can it be modified?
First, regarding the userID: it can be defined either in the request to the Direct Line service that generates the token or as a property on the Web Chat rendering component. As you have seen in the warning, Web Chat defaults to the userID defined with the token request. It does this for a few reasons, two being security and preventing two differently defined userIDs from being used.
To 'bake' the userID into the token, it is sent in the request's body, as defined here. Here's an example:
{
  "user": {
    "id": "dl_abc123",
    "name": "Steve",
    "role": "user"
  }
}
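If it helps, here is a rough sketch of making that token-generation request from Python with the requests library (the Direct Line secret and user id below are placeholders; the returned token is what you hand to the Web Chat component):

import requests

DIRECT_LINE_SECRET = "YOUR_DIRECT_LINE_SECRET"  # placeholder secret

resp = requests.post(
    "https://directline.botframework.com/v3/directline/tokens/generate",
    headers={"Authorization": f"Bearer {DIRECT_LINE_SECRET}"},
    json={"user": {"id": "dl_abc123", "name": "Steve", "role": "user"}},
    timeout=10,
)
resp.raise_for_status()
token = resp.json()["token"]  # pass this token (not the secret) to Web Chat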
As for the conversation restarting: with Web Chat you don't need to manually refresh the token. Web Chat takes care of this for you by refreshing it every 15 minutes. You can see this here in the BotFramework-DirectLineJS repo, which is a dependency of Web Chat. It's possible that your refresh and Web Chat's refresh are somehow colliding. Disable/remove your refresh-token implementation and try relying on Web Chat alone to take care of it. See if this makes a difference.
If it doesn't, then I would suggest you try implementing persistence in your Web Chat hosting page. This will allow you to reload the page or navigate away and come back without losing the conversation. You can follow the instructions in this SO response on how to set this up.

Custom google home action should always reconnect to get it working

I have a custom Google Home action implemented as described in the documentation (OAuth2 setup, SYNC, EXECUTE, etc.), and everything works as expected in my Google Home app and on my Google Home physical devices.
Now, every now and then I need to reconnect the action in the Google Home app because it seems it can no longer reach my devices after some time. I checked whether the refresh tokens on my OAuth server are working, and they are. Also, my access token expires after 20 minutes, while I only need to reconnect after several hours, so in my opinion the refreshing works.
Now, are there any restrictions on using the TEST mode of a Google Home action?
My case is specifically for personal use (integration with a personal server and home automation system), so I am not actually planning on releasing it; I just want to use it for myself. Is this allowed? Can I just leave my action in 'test' forever for such purposes?
EDIT: 18/05/2022: custom actions still working flawlessly after 6 months in test :-)
EDIT: 02/02/2023: custom actions still working flawlessly in test :-)
Additional question:
If I have to submit the action for release, I cannot meet the expectation of implementing Report State, as I have no control over when buttons are pressed in my home automation system. Is Report State also accepted when I report the state of my devices over time (let's say, every hour)?
Thanks
EDIT:
So it seems there is something wrong with my token refresh, but I don't know what. When I try through Postman, everything works as expected. In the Stackdriver logs I see this:
jsonPayload: {
  #type: "type.googleapis.com/google.identity.accountlinking.type.AccountLinkingError"
  errorReason: "Failed to get response from 3P. 3P returned malformed response like invalid response code or un-inflatble body."
  request: {
    body: "grant_type=refresh_token&refresh_token=REDACTED_VALUE&client_id=qbusauth&client_secret=REDACTED_VALUE"
    method: "POST"
    uri: "https://******.azurewebsites.net/token"
  }
  sessionId: -1039956344
  step: "REFRESH_ACCESS_TOKEN"
}
If you don't plan to submit your Action for release, you'll just need to occasionally re-enable device testing through the console.
To minimize the number of query intents to your fulfillment, you should implement Report State and proactively send device states to update HomeGraph. You would have to implement this if you decide to release your Action.
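For reference, here is a rough Report State sketch in Python (assumptions: a service account key file with the HomeGraph API enabled, and placeholder values for the agent user id, device id and states):

import uuid

import requests
from google.oauth2 import service_account
from google.auth.transport.requests import Request

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path to the service account key
    scopes=["https://www.googleapis.com/auth/homegraph"],
)
creds.refresh(Request())  # obtain an access token

payload = {
    "requestId": str(uuid.uuid4()),
    "agentUserId": "user-123",  # placeholder: the id from your SYNC response
    "payload": {
        "devices": {
            "states": {
                "light-1": {"on": True, "online": True},  # placeholder state
            }
        }
    },
}

resp = requests.post(
    "https://homegraph.googleapis.com/v1/devices:reportStateAndNotification",
    headers={"Authorization": f"Bearer {creds.token}"},
    json=payload,
    timeout=10,
)
resp.raise_for_status()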

YouTube Data API v3 shows quota 0 out of 0

I'm using the YouTube Data API v3 to retrieve video info (title, description, thumbnail) when a user pastes a URL into my internal system. I started getting a 403 error about quota.
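For context, the lookup itself is just a videos.list call along these lines (the API key and video id are placeholders):

import requests

API_KEY = "YOUR_API_KEY"   # placeholder
VIDEO_ID = "VIDEO_ID"      # placeholder, parsed from the pasted URL

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "snippet", "id": VIDEO_ID, "key": API_KEY},
    timeout=10,
)
resp.raise_for_status()  # this is where the 403 quota error surfaces
snippet = resp.json()["items"][0]["snippet"]
title = snippet["title"]
description = snippet["description"]
thumb_url = snippet["thumbnails"]["default"]["url"]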
When I open the Developers Console dashboard, it shows 45 requests in the last 30 days (the system is not used all the time).
When I click through to that API's overview, under Quotas, my 'Queries per day' shows 0, with no possibility of changing that limit.
There is a message at the top of the page about requesting a higher quota limit, but when I follow that link the form says my current limit is 0 and asks for a new limit, which is also 0! I can't proceed without a number greater than 0.
Does anyone know if this is a bug?
This is intended behavior; see issue #211012781.
Hi. If you're seeing Queries per day quota set to 0 and the API is indeed enabled, then this means that your project’s access to YouTube Data API Service has been disabled.
You should’ve received a notice via email regarding this action, which also contains the steps that need to be taken to regain the project’s access. But just in case you missed it, please fill out and submit the exceptions form below:
https://support.google.com/youtube/contact/yt_api_form?hl=en

Polling Outlook mail folder (inbox) occasionally returns ErrorInvalidMailboxItemId

Something strange is happening that I cannot find the cause or reason for.
I have a loop which polls the Inbox of an authorized user every minute. This goes fine for some time, but then I get a 404 with error code ErrorInvalidMailboxItemId ("Item Id doesn't belong to the current mailbox."). I may get this, for example, twice in a row, and then the polling starts working again.
GET /v1.0/me/mailFolders/xxx/messages?$filter=isRead%20ne%20true&$count=true&$top=10
Nothing that I can see is different between the polls, so I'm baffled as to why the server suddenly returns 404.
Searching for this error turns up mentions of shared, archive and delegated mailboxes; however, this inbox is none of those, and besides, the error should then be consistent, which it is not.
The same bearer token is used for all the polls: when it works, when it doesn't, and when it starts working again.
Any ideas why this goes wrong? Or do I have to watch for this error and then just retry or ignore it for a while?
Thanks
I would try the following:
Do a retry and see if it works (see the sketch below the list).
Implement detailed response logging on the application side, isolate the item, make the same API call in Microsoft Graph Explorer, and see whether it exists and returns the data or not.
Make sure you have the necessary permissions to access the shared mailbox, archive mailboxes, or the targeted mailboxes, etc.
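As a sketch of the retry approach (Python with requests against the Graph REST endpoint; the folder id, token handling and back-off values are placeholders):

import time

import requests

def poll_unread(access_token, folder_id, max_retries=3):
    """Poll unread messages; back off and retry on the transient 404."""
    url = f"https://graph.microsoft.com/v1.0/me/mailFolders/{folder_id}/messages"
    headers = {"Authorization": f"Bearer {access_token}"}
    params = {"$filter": "isRead ne true", "$count": "true", "$top": "10"}
    for attempt in range(max_retries):
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        if resp.status_code == 404 and "ErrorInvalidMailboxItemId" in resp.text:
            time.sleep(2 ** attempt)  # transient glitch: exponential back-off
            continue
        resp.raise_for_status()
        return resp.json()["value"]
    raise RuntimeError("Polling kept failing with ErrorInvalidMailboxItemId")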

EWS API - Error when recreating notification subscriptions

When working with pull subscriptions to Office365 calendar folders, I've been getting a lot of ErrorReadEventsFailed messages in the SendNotification request. This error essentially means that the subscription can no longer be found, and the server should no longer expect new notifications.
Checking Microsoft's recommended error handling, the solution is to use Autodiscover to rediscover the ExternalEwsUrl or EwsPartnerUrl, and create a new subscription.
With Office 365, the Autodiscover service seems nearly impossible to use in combination with OAuth2 service accounts, so I've been using https://outlook.office365.com/EWS/Exchange.asmx as the main EWS endpoint.
However, when I try to create a new subscription for the specific calendar folder, I keep getting a generic 500 ErrorNoRespondingCASInDestinationSite error:
Exchange Web Services are not currently available for this request because none of the Client Access Servers in the destination site could process the request.
The strange part is this only happens directly after receiving the initial ErrorReadEventsFailed error. If I try again in, say, 30 seconds, the request goes through without a problem.
After doing some research, it seemed that most users found it helpful to ensure that the X-AnchorMailbox header was set properly for the user that the service account wishes to impersonate. I double-checked this header, and it is indeed being sent along the request to resubscribe.
This problem may be solvable by an exponential back-off solution, or by just retrying X amount of times until the request goes through. It seems to me that when the subscription gets "lost", the O365 service needs time to change the DNS of the Exchange server (it's the only thing I can think of).
Any help would be greatly appreciated!
Given the documentation at: https://msdn.microsoft.com/en-us/library/office/dn458788(v=exchg.150).aspx
When a subscription is lost, or is no longer accessible, it is best to create a new subscription and not include the old watermark in the new subscription. Resubscribing with the old watermark causes a linear scan for events, which is costly.
Instead, create a new subscription and compare folder properties to look for content changes that occurred between the lost subscription and the new subscription. The extended folder properties that we recommend that you check are PR_LOCAL_COMMIT_TIME_MAX (0x670a0040) and PR_DELETED_COUNT_TOTAL (0x670b0003).
You can do this by creating an extended property definition.
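For example, here is a rough sketch of checking those two folder properties with the Python exchangelib library rather than the EWS Managed API (the credentials, server and mailbox below are placeholders; in practice you would use OAuth2 and impersonation):

from exchangelib import Account, Configuration, Credentials, ExtendedProperty, DELEGATE
from exchangelib.folders import Calendar

class LocalCommitTimeMax(ExtendedProperty):
    property_tag = 0x670A        # PR_LOCAL_COMMIT_TIME_MAX, type SystemTime
    property_type = 'SystemTime'

class DeletedCountTotal(ExtendedProperty):
    property_tag = 0x670B        # PR_DELETED_COUNT_TOTAL, type Integer
    property_type = 'Integer'

# Register the properties before fetching folders so EWS is asked for them.
Calendar.register('local_commit_time_max', LocalCommitTimeMax)
Calendar.register('deleted_count_total', DeletedCountTotal)

creds = Credentials('service@example.com', 'password')   # placeholder credentials
config = Configuration(server='outlook.office365.com', credentials=creds)
account = Account('user@example.com', config=config, autodiscover=False,
                  access_type=DELEGATE)

# Compare these values with the ones saved alongside the old subscription to
# detect changes that happened while the subscription was lost.
print(account.calendar.local_commit_time_max, account.calendar.deleted_count_total)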
I think this may help you!!
