Outlook REST API 410 Error: SyncStateNotFound

I get a 410 when syncing messages:
{"code":"SyncStateNotFound","message":"The sync state generation is not found; generation=1;[highest=4][4][2][3]."}
This only occurs when syncing messages for select mail folders on select accounts, and only on a post-initial sync using the relevant delta token. I can reproduce it by making
GET https://outlook.office365.com/api/v2.0/me/MailFolders('{folder_id}')/messages/?$deltaToken={delta_token}
in Microsoft's Outlook sandbox.
Here are the exact steps to reproduce it deterministically:
1) Initial Message Sync:
GET https://outlook.office365.com/api/v2.0/me/MailFolders('{folder_id}')/messages
2) Sync with initial delta token:
GET https://outlook.office365.com/api/v2.0/me/MailFolders('{folder_id}')/messages/?$deltaToken={delta_token}
3) Sync with skip token until delta token:
GET https://outlook.office365.com/api/v2.0/me/MailFolders('{folder_id}')/messages/?$skipToken={skip_token}
4) ERROR OCCURS HERE: The mail folder receives an update, so I re-sync messages with the delta token from (3). The call below throws a 410 and I can't sync messages.
GET https://outlook.office365.com/api/v2.0/me/MailFolders('{folder_id}')/messages/?$deltaToken={delta_token}
To reiterate: I've isolated this to testing in the Outlook sandbox alone, and it still occurs. By testing I mean making the GET call to sync (i.e., performing step (2)) using the deltaToken from (3) and its corresponding folderId as query parameters.

Dumb mistake: I was passing in the initial delta token instead of the current one.
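For anyone else hitting this, here is a minimal Python sketch of the sync loop (using the requests library; folder_id and access_token are placeholders) that always carries the most recent token forward rather than reusing the initial one:

import requests

def sync_folder(folder_id, access_token, delta_link=None):
    # Resume from the stored deltaLink if we have one; otherwise do an initial sync.
    url = delta_link or (
        "https://outlook.office365.com/api/v2.0/me/"
        f"MailFolders('{folder_id}')/messages"
    )
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Prefer": "odata.track-changes",  # ask the API to return delta/skip links
    }
    messages = []
    while True:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        body = resp.json()
        messages.extend(body.get("value", []))
        if "@odata.nextLink" in body:
            # More pages of changes: follow the nextLink (carries a $skipToken).
            url = body["@odata.nextLink"]
        else:
            # Done: persist the deltaLink (it carries the *current* $deltaToken)
            # and start the next sync round from it, not from the initial token.
            return messages, body.get("@odata.deltaLink")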

Related

Google.apis returns error code 400 after creating maximum amount of service account keys

We are using the Google.apis SDK, version 1.36.1, to create service account keys for GCP service accounts.
When we reach the maximum number of keys (10), instead of getting a meaningful error message / error code we receive a generic 400 error code with a "Precondition check failed." message.
We used to get error code 429, indicating we had reached the maximum number of keys.
Current GoogleApiException object:
Google.GoogleApiException: Google.Apis.Requests.RequestError
Precondition check failed. [400]
Errors [
Message[Precondition check failed.] Location[ - ] Reason[failedPrecondition] Domain[global]
]
The current return code does not provide us with enough information. Is there any other way for us to know the reason for the failure?
This error message is also related to limits. You can take the official documentation for the Classroom API as an example.
I have found myself in a similar situation, where we were deleting service account keys and immediately creating new ones. We were getting the same error because there is a delay in the system: it can take 60-90 seconds for the deletion to propagate before you can create the key again.
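If you want to guard against both causes in code, here is a minimal Python sketch (assuming the google-api-python-client library; the helper name and retry delay are mine, not part of the SDK): it counts the user-managed keys before creating a new one and retries once the deletion delay mentioned above has passed.

import time

from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

MAX_USER_KEYS = 10  # GCP limit on user-managed keys per service account

def create_key_with_retry(credentials, service_account_email, retries=3):
    # Create a service account key, retrying on the generic 400
    # "Precondition check failed." that can follow a recent key deletion.
    iam = build("iam", "v1", credentials=credentials)
    name = f"projects/-/serviceAccounts/{service_account_email}"
    keys_api = iam.projects().serviceAccounts().keys()

    # Count only user-managed keys; system-managed keys don't count toward the limit.
    all_keys = keys_api.list(name=name).execute().get("keys", [])
    user_keys = [k for k in all_keys if k.get("keyType") == "USER_MANAGED"]
    if len(user_keys) >= MAX_USER_KEYS:
        raise RuntimeError(
            f"Key limit reached ({len(user_keys)}/{MAX_USER_KEYS}); delete a key first"
        )

    for attempt in range(retries):
        try:
            return keys_api.create(name=name, body={}).execute()
        except HttpError as err:
            if err.resp.status == 400 and attempt < retries - 1:
                time.sleep(90)  # wait out the 60-90 s deletion propagation delay
                continue
            raise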

How do you transfer tokens from the lockup contract using the CLI?

If tokens were locked using a lockup contract (docs) what is the process for using the CLI to transfer those tokens once they have reached their unlocking time, either fully or partially?
Before the lockup is fully unlocked, you can call the transfer method on the contract, for example:
near call <yourlockup>.lockup.near transfer '{"amount": "1000000000000000000000000000", "receiver_id": "<receiver_account_id>"}' --accountId=<youraccount> --networkId=mainnet --nodeUrl=https://rpc.mainnet.near.org --gas=200000000000000 --useLedgerKey
Once it is fully unlocked, you can also add a full access key by invoking add_full_access_key with {"new_public_key": "<base58 key>"}, and convert your lockup account into a regular account.
There is a step-by-step guide here, which also includes steps related to staking/unstaking:
https://github.com/near/core-contracts/tree/master/lockup#staking-flow
The key steps after unstaking and withdrawing are:
near call lockup1 refresh_staking_pool_balance '{}' --accountId=owner1 --gas=75000000000000
near view lockup1 get_liquid_owners_balance '{}'
near call lockup1 check_transfers_vote '{}' --accountId=owner1 --gas=75000000000000
near call lockup1 transfer '{"amount": "10000000000000000000000000", "receiver_id": "owner-sub-account"}' --accountId=owner1 --gas=50000000000000
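Note that the amount argument in these calls is expressed in yoctoNEAR (1 NEAR = 10^24 yoctoNEAR). A small Python helper for building that string, purely as an illustration:

from decimal import Decimal

YOCTO_PER_NEAR = 10**24  # 1 NEAR = 10^24 yoctoNEAR

def near_to_yocto(amount_near):
    # Build the integer string the lockup contract's `transfer` argument expects.
    return str(int(Decimal(amount_near) * YOCTO_PER_NEAR))

print(near_to_yocto("1000"))  # 1000000000000000000000000000 (1,000 NEAR, as in the first call above)
print(near_to_yocto("10"))    # 10000000000000000000000000 (10 NEAR, as in the last call above)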

In Substrate what does code: 1012 "Transaction is temporarily banned" mean?

The full text of the message is :
{code: 1012, message: "Transaction is temporarily banned"}
This would indicate that the transaction is held somewhere in the Substrate runtime mempool or something of that nature, but it is not entirely clear what can trigger this or what the eventual outcome might be.
For example,
1) Is it that too many transactions have been sent from a given account, IP address, or similar? Has some threshold been reached?
2) Is the transaction actually invalid, or not?
3) The use of the word "temporary" suggests a delay in processing, not an outright rejection of the transaction. Therefore does this suggest that the transaction is valid, but delayed? If so, for how long?
The comments in the Substrate runtime files core/rpc/src/author/errors.rs and core/transaction-pool/graph/src/errors.rs are no clearer about what the outcome is.
In front of the mempool there is a transaction blacklist, which can trigger this error. Specifically, this error means that a transaction with the same hash was either:
Part of a recently mined block, or
Detected as invalid during block production and removed from the pool.
Additionally, this error can occur when:
The transaction reaches its longevity, i.e. it is not mined within TransactionValidation::longevity blocks after being imported into the pool.
By default, longevity is set to u64::max, so this should not normally be the problem.
In any case, -ltxpool=log should reveal more details about this error.
The ban is only temporary because the transaction will be removed from the blacklist when either:
30 minutes pass
There are more than 4,000 transactions on the blacklist
Check out core/transaction-pool/graph/src/rotator.rs.
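In other words, the behaviour is roughly the following (a simplified Python model of the rotator's ban list, not the actual Rust code; the 30-minute and 4,000-entry limits are the ones mentioned above):

import time

BAN_DURATION_SECS = 30 * 60  # bans expire after 30 minutes
MAX_BANNED = 4_000           # oldest entries are evicted past this size

class BanList:
    # Simplified model of the transaction pool rotator's blacklist.
    def __init__(self):
        self._banned = {}  # tx_hash -> time the ban was placed

    def ban(self, tx_hash):
        self._banned[tx_hash] = time.monotonic()
        while len(self._banned) > MAX_BANNED:
            oldest = min(self._banned, key=self._banned.get)
            del self._banned[oldest]

    def is_banned(self, tx_hash):
        placed = self._banned.get(tx_hash)
        if placed is None:
            return False
        if time.monotonic() - placed > BAN_DURATION_SECS:
            del self._banned[tx_hash]  # the temporary ban has expired
            return False
        return True  # resubmitting this hash now yields error 1012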

FB Messenger API - Receiving double requests

I have a working FB Bot built with Ruby which allows players to play a scavenger hunt.
Sometimes though, when I have multiple players in a team, FB sends me a player's 'Answer' webhook twice. I looked into it and at first thought it had to do with the 20-second timeout if FB gets no 200 OK response (Docs here). After checking the logs, though, I am receiving the second webhook from FB only 14 seconds later. See below:
# Webhook #1
{"object"=>"page", "entry"=>[{"id"=>"252445748474312", "time"=>1532153642358, "messaging"=>[{"sender"=>{"id"=>"1709242109154907"}, "recipient"=>{"id"=>"252445748474312"}, "timestamp"=>1532153641935, "message"=>{"mid"=>"0FeOChulGjuPgg3YJqEgajNsY8kMfNRt_bpIdeegEeE54h-KB8szcd-EQ-UHUT3850RwHgH4TxVYFkoFwxqhtg", "seq"=>402953, "text"=>"Larrikins"}}]}]}
# Webhook #2 (14 seconds later)
{"object"=>"page", "entry"=>[{"id"=>"252445748474312", "time"=>1532153656901, "messaging"=>[{"sender"=>{"id"=>"1709242109154907"}, "recipient"=>{"id"=>"252445748474312"}, "timestamp"=>1532153641935, "message"=>{"mid"=>"0FeOChulGjuPgg3YJqEgajNsY8kMfNRt_bpIdeegEeE54h-KB8szcd-EQ-UHUT3850RwHgH4TxVYFkoFwxqhtg", "seq"=>402953, "text"=>"Larrikins"}}]}]}
Notice both are exactly the same apart from the first "time" attribute (14 secs later).
Due to a number of methods and calls that I process after receiving the first webhook, the 200 OK response is only sent back to FB once I have finished sending my messages in response (hence the 14-second delay).
So I have two questions:
Is the 14-second delay too long, and is that why FB is resending? If so, how can I send a 200 OK response straight away (head :ok)?
Is it another issue entirely?
Also ensure that "Echo" is disabled: go to Settings > Webhooks and edit the events.
An asynchronous approach, such as Node.js, is recommended. In my case I work with AWS SQS: I have workers that process the requests without blocking (they don't wait), and I return 200 "ok" to FB immediately to avoid FB sending the message to my webhook again.
Another approach is to store the mid in a database and check on each request whether the mid already exists; if it does, don't process the message. I used DynamoDB (AWS) with TTL enabled, so the database cleans itself every hour by erasing old requests.
I think it is the 15-second wait before replying; this was also happening to me, as Facebook automatically retries when you don't reply fast enough. Te EEe Te's idea is solid: write some mechanism to cache mids and check whether a message is a duplicate before processing.
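To make the immediate-ack plus mid-deduplication idea concrete, here is a minimal sketch in Python/Flask (not the original Rails app; the names and in-memory store are illustrative, and a real bot would use something like the DynamoDB-with-TTL setup above):

import threading
import time

from flask import Flask, request

app = Flask(__name__)

SEEN_TTL_SECS = 3600  # remember mids for an hour, like the DynamoDB TTL above
_seen_mids = {}       # mid -> time first seen

def _is_duplicate(mid):
    now = time.time()
    for old_mid, seen_at in list(_seen_mids.items()):
        if now - seen_at > SEEN_TTL_SECS:
            del _seen_mids[old_mid]  # expire old entries
    if mid in _seen_mids:
        return True
    _seen_mids[mid] = now
    return False

def process_message(event):
    pass  # slow game logic goes here, off the request path

@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            mid = event.get("message", {}).get("mid")
            if mid and _is_duplicate(mid):
                continue  # Facebook retried; this message was already handled
            threading.Thread(target=process_message, args=(event,)).start()
    return "ok", 200  # acknowledge immediately so Facebook does not retry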

Outlook REST API 500 LegacyPagingToken error

I am using the Microsoft Outlook REST API to synchronize messages in a folder using skipTokens with the Prefer: odata.track-changes header.
After 62 successful rounds of results, I get a 500 ErrorInternalServerError with the message: Unable to cast object of type 'LegacyPagingToken' to type 'Microsoft.Exchange.Services.OData.Model.SkipToken'.
I have tried:
Retrying the same query (https://outlook.office.com/api/v2.0/me/MailFolders/Inbox/messages/?%24skipToken=1BWUA9eXs5dN89tPsr_FOvtzINQAA0Cwk5o), which results in the same error
Restarting the sync, which results in the same error at the same point
Adding a new message to the Inbox and restarting the sync, which results in the same error at the same point
Moving the messages from that part of the sync to another folder (in case the messages themselves were causing the problem), which results in the same error at the same point
Has anybody run into this error or have suggestions on what might cause it or workarounds?
It looks like the issue was on my end, in parsing the skipToken from the @odata.nextLink response. The token in the original question is invalid: the actual skipToken passed back from the API had -AAAA on the end. After 63 queries, during which the skipToken increments, the Base64-encoded form started using characters that the regexp I was using didn't match. Switching from a \w regexp to a proper URL parser solved the problem.
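For anyone hitting the same parsing pitfall, here is a small Python sketch of that fix: extract the token from the @odata.nextLink with a URL parser instead of a \w-only regexp (the example token is illustrative):

from urllib.parse import urlparse, parse_qs

def extract_skip_token(next_link):
    # parse_qs decodes %24skipToken back to $skipToken for us.
    query = parse_qs(urlparse(next_link).query)
    for key, values in query.items():
        if key.lower() == "$skiptoken":
            return values[0]
    raise ValueError(f"no skipToken found in: {next_link}")

# A \w-based regexp silently truncates the token once the encoding starts
# using '-' or other non-word characters (e.g. a token ending in '-AAAA').
# extract_skip_token("https://outlook.office.com/api/v2.0/me/MailFolders/Inbox/messages/?%24skipToken=1BWUA9eXs5dN89tPsr_FOvtzINQAA0Cwk5o-AAAA")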
