Chainlink's dynamic upkeep registration example failing: UpkeepIDConsumerExample.registerAndPredictID errored: execution reverted

I'm playing around with Chainlink's "Register an Upkeep using your own deployed contract" example: https://docs.chain.link/docs/chainlink-keepers/register-upkeep/#register-an-upkeep-using-your-own-deployed-contract
However, once the UpkeepIDConsumerExample is deployed with the LINK Token Contract, Registry and Registrar parameters for the respective chain, I am unable to use the UpkeepIDConsumerExample.registerAndPredictID function, as it fails:
Gas estimation errored with the following message (see below). The transaction execution will likely fail. Do you want to force sending?
Internal JSON-RPC error. { "code": -32000, "message": "execution reverted" }
I've tried on Rinkeby, Mumbai and Polygon mainnet, in case the testnets weren't live yet. I've used the parameters suggested by the docs for calling the function, and I have sufficient LINK in my MetaMask.
Is it correct to use these: https://docs.chain.link/docs/link-token-contracts/ as the LINK Token Interface parameter?
Thanks!

I was able to make this work (though I tried only on Goerli) using the code from the official docs that you linked to.
For the benefit of others that read this post, I will break it down into detailed steps - perhaps more than you needed for an answer!
Prerequisites
Get some LINK tokens in your browser wallet
Deploy a Chainlink Keepers-compatible contract -- this is your Upkeep contract, the one that will be automated. The docs include an example Upkeep smart contract that you can copy and deploy right away. You can use 10 as the interval -- that's 10 seconds -- so you can see the upkeep happen quickly. Note this Upkeep's address.
Next, deploy the UpkeepIDConsumerExample from the example in the docs -- the smart contract that programmatically registers your Upkeep contract. It handles registering the Upkeep contract you deployed in Step 2 with Chainlink's Keepers network, so that the network can automate the running of functions in your Upkeep contract. Note this contract's address.
Making it work
From your wallet, which should now have LINK in it, send 5 LINK to the deployed UpkeepIDConsumerExample address. This is funding it will need to send onwards to your Upkeep (Upkeeps need funding so they can pay the Keepers network for the compute work it does in performing the automations).
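If you'd rather fund the contract from a script than from MetaMask, here's a minimal sketch using web3.py (v6 names). Everything here is an assumption on my part: the RPC URL, the addresses, and the key are placeholders, and LINK is treated as a plain ERC-20 for the transfer.

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://your-rpc-endpoint"))  # placeholder RPC URL

LINK_TOKEN = "0x..."   # LINK token address for your chain (see the link-token-contracts page)
CONSUMER = "0x..."     # your deployed UpkeepIDConsumerExample
SENDER = "0x..."       # your wallet address
PRIVATE_KEY = "0x..."  # placeholder; never hard-code a real key

# Minimal ERC-20 ABI fragment: transfer(address,uint256)
ERC20_ABI = [{
    "name": "transfer",
    "type": "function",
    "inputs": [{"name": "to", "type": "address"}, {"name": "value", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
    "stateMutability": "nonpayable",
}]

link = w3.eth.contract(address=LINK_TOKEN, abi=ERC20_ABI)
tx = link.functions.transfer(CONSUMER, 5 * 10**18).build_transaction({  # 5 LINK in juels
    "from": SENDER,
    "nonce": w3.eth.get_transaction_count(SENDER),
})
signed = w3.eth.account.sign_transaction(tx, PRIVATE_KEY)
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
print(w3.eth.wait_for_transaction_receipt(tx_hash)["status"])  # 1 == success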
Using Remix, connect to the right network and then connect to your deployed UpkeepIDConsumerExample contract by using its address.
When Remix shows your contract and its interactions in the DEPLOYED CONTRACTS section of the UI, fill in the parameters for the registerAndPredictID() function using this table in the docs.
While following the table referred to above, please note (a filled-in example follows this list):
upkeepContract - the Upkeep contract's address, i.e. the one you deployed in Step 2 of Prerequisites
gasLimit - I used 3000000
adminAddress - this can just be your wallet address, the one you've deployed from, are sending LINK from, etc.
amount - 5 LINK expressed in juels (LINK's equivalent of wei), so 5000000000000000000
sender - the UpkeepIDConsumerExample's address; in this example it's the calling contract itself
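For illustration, here is a filled-in set of values matching the notes above. The addresses are placeholders, and any remaining fields from the docs table (such as name or checkData) can be filled in per that table:

name:           "my first upkeep"  (any label works)
upkeepContract: 0xYourUpkeepContract  (from Step 2 of Prerequisites)
gasLimit:       3000000
adminAddress:   0xYourWallet
amount:         5000000000000000000  (5 LINK x 10**18 juels per LINK)
sender:         0xYourUpkeepIDConsumerExample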
Run registerAndPredictID() with the params as per the previous step. It should run successfully.
Verify by going to the Keepers App and checking under "My Upkeeps" for a new Upkeep that you just programmatically created.
Cleanup
In the Keepers App, note the LINK balance of the Upkeep you just created and funded with the 5 LINK -- it may be a bit less than the 5 LINK you sent because the Keepers network may have already run your Upkeep (we set the interval to 10 seconds in Step 2 of Prerequisites).
On Etherscan, check whether UpkeepIDConsumerExample still holds any LINK. It shouldn't: the 5 LINK you sent from your wallet to this contract was transferred onwards when you ran registerAndPredictID().
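If you'd rather check from a script than on Etherscan, a standard ERC-20 balanceOf call does the same thing. This sketch reuses the web3.py setup and placeholder addresses from the funding sketch above:

# Minimal ERC-20 ABI fragment: balanceOf(address)
ERC20_BALANCE_ABI = [{
    "name": "balanceOf",
    "type": "function",
    "inputs": [{"name": "owner", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
    "stateMutability": "view",
}]
link = w3.eth.contract(address=LINK_TOKEN, abi=ERC20_BALANCE_ABI)
# Should be ~0 once registerAndPredictID() has forwarded the 5 LINK onwards
print(link.functions.balanceOf(CONSUMER).call() / 10**18)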
Hope this helps!

Related

Alter 'status' request interval of Cloud Build submit

I'm trying to set up CI/CD for a mono repository using Google Cloud Build. We have a single Cloud Build trigger that starts a build on a new commit; it does some general steps and then starts a build for every (micro)service in the mono repository using gcloud builds submit.
This however means that if 4 or 5 people push code to the repository at roughly the same time, we can have around 50-70 concurrent builds running in Cloud Build. That in itself isn't an issue for us. The only issue is that when this happens, the following errors pop up:
{
  "code": 429,
  "message": "Quota exceeded for quota metric 'Build and Operation Get requests' and limit 'Build and Operation Get requests per minute' of service 'cloudbuild.googleapis.com' for consumer 'project_number:<PROJECT_NUMBER>'.",
  "status": "RESOURCE_EXHAUSTED",
  "details": [{
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "reason": "RATE_LIMIT_EXCEEDED",
    "domain": "googleapis.com",
    "metadata": {
      "service": "cloudbuild.googleapis.com",
      "consumer": "projects/<PROJECT_NUMBER>",
      "quota_limit": "GetRequestsPerMinutePerProject",
      "quota_metric": "cloudbuild.googleapis.com/get_requests"
    }
  }]
}
In other words: we are running into quota limits. The quota allows us to make only 900 operational requests per minute.
We already tried switching to private pools, in the hope that the above quota limit only applied when not using private pools, but this unfortunately still makes us hit the quota.
Now I am trying to find out whether I can decrease the number of these operational requests.
A possible solution might be related to how I am using gcloud builds submit. When you run gcloud builds submit, it starts a new build, waits for the build to finish, and shows the output of the build. To achieve this, I presume gcloud polls every few seconds to find out the status of the build. I suspect these 'status' requests are why my Cloud Build quota limit is reached, which is why I'm trying to see how I can lower the number of these requests per minute.
One option is to simply decrease the number of builds running in parallel, which unfortunately is not an option in my situation: if I execute them sequentially, it simply takes more time than is acceptable.
Another option would be to increase the time between such 'status' requests. However, on this page I unfortunately did not find a CLI flag to alter this.
Note: I did find the --async flag, but that does NOT help me, since I still want the process to wait until the build has succeeded. I also found --suppress-logs, which does NOT help either, since those requests presumably don't interact with Cloud Build but with the GCS bucket where the logs are stored.
The only option left that I can think of is to start my builds with the --async flag and then manually poll whether the build has succeeded, using a longer interval. However, that feels like a lot of manual work, for which I'd need to write and maintain some bash scripts. This preferably isn't a path I'd like to take unless really necessary.
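For what it's worth, here is a rough sketch of what I mean, assuming the standard gcloud builds submit/describe commands and an arbitrary 30-second interval (in Python rather than bash, purely for readability):

import subprocess
import time

def submit_async(source_dir):
    # --format='value(id)' should print just the ID of the queued build
    return subprocess.check_output(
        ["gcloud", "builds", "submit", source_dir, "--async", "--format=value(id)"],
        text=True,
    ).strip()

def wait_for_build(build_id, poll_seconds=30):
    while True:
        status = subprocess.check_output(
            ["gcloud", "builds", "describe", build_id, "--format=value(status)"],
            text=True,
        ).strip()
        # Terminal statuses include SUCCESS, FAILURE, TIMEOUT, CANCELLED
        if status not in ("QUEUED", "WORKING"):
            return status
        time.sleep(poll_seconds)  # one status request per interval instead of one every few seconds

build_id = submit_async(".")
print(wait_for_build(build_id))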
Does anyone know of another way of achieving this?
If 4 or 5 people push code to the repository
This shouldn't happen. The reason it shouldn't happen is that you should use the "push" trigger on the main branch, not on a development branch.
What do I mean by this?
I mean that building should occur on the main branch, which would correspond to the joint effort of those five users and a responsible party in charge of unifying their changes.
So, really, your users should be pushing to the development branch, and pushes to main should be reserved for things that need to be built.
How can we work around this if we're only allowed one branch or are required to have updates visible on one branch?
My recommendation would be to use the tag filter -- specifically, filter the pushes by tag, as mentioned in the documentation. That way only the pushes by the person in charge of merging the changes will be built (assuming that this person pushes the tag you've set).
TL;DR
Don't create push triggers for Cloud Build on a branch multiple people are working on. Either create the trigger with a tag filter, or have separate development and main branches (people work on dev; builds are only made from pushes to main).

3DSecure periodically timing out but taking payment

I am experiencing a very frustrating issue with SagePay Direct when a card payment initiates a 3DSecure challenge.
Customers are reporting either a hanging iFrame or a "payment declined" response. What's worse is that in some instances Sage takes the payment, but the user is unaware of this and tries to buy again.
Looking at my logs my code is working as expected and is loading the iFrame with the returned ACSURL as the src.
After searching the web, it appears to be a known issue with a timeout occurring on the secure merchant issuer that I hand off to.
The trouble I have is that I have no control over the response (or lack thereof) from the issuer, as it's in an iFrame.
Sage have not been very helpful with this problem, only going as far as to say "we have heard of customers who experience this issue".
Does anyone have any experience of this problem and know how to resolve it? I guess the bottom line is to turn off the 3DSecure checks, but this seems counterproductive given the new EU ruling coming into force at some point.
Worth pointing out that this only affects a small percentage of my customer base, and a lot of transactions are processing successfully (even with the password challenge), but the customers who experience problems are rightly shouting loudly.
Anyone have any ideas?
Thanks
We process up to 1000-2000 transactions daily via SagePay, using the Direct protocol. They are very cheap but their service is in all honesty fairly terrible. We have a single digit quantity of transactions every day that fail in this way. We've also got another provider and don't experience the same issues.
We have a routine job that asks the SagePay Reporting API about transactions that failed, to see what the current status is (did SagePay get the transaction? was it successfully authorised? etc). This API is utterly, utterly terrible and was a nightmare to integrate with, but it's useful as at least we can refund customers without having to log into the SagePay dashboard.
One thing that we discovered (that isn't documented anywhere on the SagePay site as far as I can tell) is that you're limited to one transaction at a time, or around 20-30 transactions per minute by default. If you go over this (a temporary peak or whatever) your transactions queue up and are delayed. If it gets really busy it completely falls over, and takes a while to recover. We had to switch SagePay off entirely for a few hours due to this (we've got backups in place).
Anyway, so it turns out our transactions were all being processed on one TID (short for Terminal ID). This is akin to a physical card terminal in a shop which can only process one transaction at a time. We asked SagePay support for more and we now have 10-15.
I hope this helps you. I'd recommend implementing a fallback payment supplier in case SagePay fails. A year or two ago they had a 3 day(!!!!) outage which was fairly devastating for us. We now take this seriously!
We've recently had an increase in what I believe may be the same thing. Basically, the customer would be sent off to the 3DS page, then returned to the callback page, but for reasons I can't explain the PHP session wouldn't re-establish. The POST response to the callback page was enough to identify the order and complete it (as we'd taken payment), but the customer would then be prompted to log in again; they'd see their basket as still having products in it and place a second order (which would go through successfully).
After many hours debugging and making changes I managed to replicate this on a development server whilst using mobile emulation...
Long story short, what I have done is to add:
session_regenerate_id();
when I perform the initial VSP register cURL request (this is the request where you get given the ACSURL). So far, this seems to be enough to ensure that the session gets re-established when the customer returns to the callback page.

Google's RuntimeConfig API responds with 'Our systems have detected unusual traffic from your computer network'

Since today (November 20, 2018) we get error responses from Google's RuntimeConfig API:
Our systems have detected unusual traffic from your computer network. This page checks to see if it's really you sending the requests, and not a robot...
(check this link for complete HTML error)
We retrieve variables from Google's RuntimeConfig using the API in our code. We make quite a few requests, but not more than before:
A developer starts his server locally, which retrieves all the needed variables (about 30 every time you start).
Requesting RuntimeConfig variables via gcloud results in the same HTML error:
gcloud beta runtime-config configs variables get-value databaseHost --config-name database --project=your-test-environment
Other gcloud API requests work (projects describe, gsutil, etc.).
How can I verify whether I violated any terms? I can only find a usage limit of 6000 calls per minute in the GCloud Console.
You can find the quotas for the Runtime Configurator, and how much of them you are using, in the Cloud Console under IAM & Admin. In the Quotas section you can filter on Service = Cloud Runtime Configuration API, and you should see all the quotas and how close to them you are for this API. There are 4 quotas that may affect you (docs here):
1200 Queries Per Minute (QPM) for delete, create, and update requests
600 QPM for watch requests
6000 QPM for get and list requests.
4MB of data per project, which consists of all data written to the Runtime Configurator service and accompanying metadata.
We had the exact same issue on November 20th, when a large number of our preemptible instances were reallocated at the same time.
Our startup scripts make use of the gcloud beta runtime-config ... commands, and they all responded with 503.
These commands responded correctly again after a few hours.
We had a support ticket with Google; there was a problem with their internal quota mechanisms at the time, which has since been fixed, so the issue is resolved.
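If you hit something similar, a retry with backoff around the same command can keep startup scripts alive through transient errors. A sketch -- the attempt count and delays are arbitrary choices of mine:

import subprocess
import time

def get_config_value(name, config, project, attempts=5):
    for attempt in range(attempts):
        try:
            return subprocess.check_output(
                ["gcloud", "beta", "runtime-config", "configs", "variables",
                 "get-value", name, "--config-name", config, "--project", project],
                text=True,
            ).strip()
        except subprocess.CalledProcessError:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
    raise RuntimeError("could not fetch %s after %d attempts" % (name, attempts))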

Azure Graph API does not return newly updated data

Precondition:
1. Admin of Azure AD goes to the Azure portal to change/update user data, such as First Name from Test 1 to Test 2.
2. Call the Graph API https://graph.windows.net/tenant/users?api-version=1.6 immediately.
3. Do nothing and wait about 20-30s, then call the above Graph API again.
Actual:
On step 2, the API returns the user's First Name: Test 1.
On step 3, the API returns the user's First Name: Test 2.
My question is: why does Azure not return the newly updated data at step 2, and how can I bypass this and immediately get the new data after updating from the Azure portal?
Azure AD is a large behemoth of a system. There are multiple data centers around the world, each with copies of the data; and in order to ensure that we give you the absolute best performance we may route different calls through different sources to different data centers.
My guess is that because you are using one tool to do the update, and another tool to do the read, you are seeing propagation delay between the actual source of authority for those two systems at the time of the call.
If you made the update and the read call using the same service, I believe you would not see this problem.
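If you're stuck reading through the API right after a portal update, a pragmatic workaround is to poll until the change becomes visible instead of trusting the first read. A sketch assuming the Azure AD Graph endpoint from your question and a valid bearer token; the function name, the 5-second interval, and the 60-second timeout are my own made-up choices:

import time
import requests

def wait_for_first_name(tenant, user_id, token, expected, timeout=60):
    url = f"https://graph.windows.net/{tenant}/users/{user_id}?api-version=1.6"
    deadline = time.time() + timeout
    while time.time() < deadline:
        user = requests.get(url, headers={"Authorization": f"Bearer {token}"}).json()
        if user.get("givenName") == expected:  # givenName holds the portal's "First Name"
            return user
        time.sleep(5)  # per the question, propagation takes roughly 20-30s
    raise TimeoutError("update not visible within %ss" % timeout)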

Is this Braintree testing multi-purchase error something I should worry about?

I'm trying to figure out how to test with braintree, and I'm running into what feels like a bandwidth error.
response = ::Braintree::Customer.create(payment_method_nonce: Braintree::Test::Nonce::Transactable)
token = response.customer.credit_cards.first.token
# so far so good
response = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
# still good
response = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
# response is a failure
# => Braintree::ErrorResult ... status: "gateway_rejected"
All that takes place without a pause.
If I wait a bit and run the sale line again, it works again.
This of course sets up a problem with test scripts. I can mock out the actual connection to BT, but I'm slightly worried about this. Should I be?
I work at Braintree. If you have more questions, you can always get in touch with our support team.
You can see what gateway_rejected means on the transaction statuses page of the API docs:
Gateway rejected
The gateway rejected the transaction because AVS, CVV, duplicate or fraud checks failed.
Transactions also have a gateway rejection reason, which in this case will be duplicate.
You can find more information about duplicate checking settings in the control panel docs:
Configure duplicate transaction checking
Duplicate transaction checking is enabled by default with a 30-second window in both the sandbox and production environments. These settings can be updated or disabled by users with Account Admin privileges.
Log into the Control Panel
Navigate to Settings > Processing > Duplicate Transaction Checking
Click Edit to adjust the time window or Enable/Disable to turn the feature on/off
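Given that default 30-second window, one way to keep back-to-back test sales from tripping the duplicate check is to vary the amount. Assuming the check matches on the same payment method and amount within the window (consistent with the behavior described above), here is a sketch using the Python SDK -- the Ruby calls are analogous and the credentials are placeholders:

import braintree
from braintree.test.nonces import Nonces

braintree.Configuration.configure(
    braintree.Environment.Sandbox,
    merchant_id="your_merchant_id",   # placeholder sandbox credentials
    public_key="your_public_key",
    private_key="your_private_key",
)

result = braintree.Customer.create({"payment_method_nonce": Nonces.Transactable})
token = result.customer.credit_cards[0].token

first = braintree.Transaction.sale({"payment_method_token": token, "amount": "1.00"})
# A different amount keeps the second sale from matching the first as a duplicate
second = braintree.Transaction.sale({"payment_method_token": token, "amount": "1.01"})
print(first.is_success, second.is_success)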
Looks like it may be a rate-limit error. Search their help/docs/site for information related to rate limiting, so you know what the limits are and can work around them.
However, if you're talking about automated tests, I would recommend not using external services in your test suite and mocking out everything. Ideally you want your test suite to be able to run even when the network connection is down, and you don't want it slowing down when third-party services are slow or when your network is slow.
If you really want to do a full integration test with all your third-party services, you can create a special set of tests annotated with something like "#external", and then schedule them to run once a week or so, just to flag weird changes or errors. A sketch of that idea follows.
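Here's what that annotation could look like with pytest markers -- the marker name and commands are just one way to do it, and the same pattern exists in RSpec and most other test frameworks:

import pytest

# Register the marker in pytest.ini:
#   [pytest]
#   markers = external: test hits real third-party services
@pytest.mark.external
def test_full_braintree_sandbox_flow():
    ...  # real calls to the sandbox go here

# Day-to-day runs:    pytest -m "not external"
# Weekly scheduled:   pytest -m external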
