I can't withdraw any of my CMs. I get a timeout error every time I try. I've tried multiple times over several days, and the result is the same with both the withdraw_all and withdraw commands.
Command:
ts-node C:/Users/admin/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts withdraw_all -e mainnet-beta -k Se4Gf2GsjgzZZyUfnhFVzVLcijsCXx5a9Erhsp5uoUF.json
Result:
wallet public key: Se4Gf2GsjgzZZyUfnhFVzVLcijsCXx5a9Erhsp5uoUF
(node:15300) ExperimentalWarning: stream/web is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
(node:15300) ExperimentalWarning: buffer.Blob is an experimental feature. This feature could change at any time
Total Number of Candy Machine Config Accounts to drain 2
1.98635488 SOL locked up in configs
WARNING: This command will drain ALL of the Candy Machine config accounts that are owned by your current KeyPair, this will break your Candy Machine if its still in use
Rejecting for timeout...
Timeout Error caught { timeout: true }
Withdraw has failed for config account FJXUQQ1LJjSgVN2ChC79kN1gN9e3BKbNpCkTVMTHR2CN Error: Timed out awaiting confirmation on transaction
Rejecting for timeout...
Timeout Error caught { timeout: true }
Withdraw has failed for config account HqxznK2VtoAMnaakA7YCr7Vok4Y6YnpmvNFVtQe9iwrK Error: Timed out awaiting confirmation on transaction
Congratulations, 0 config accounts have been successfully drained.
Now you kinda rich, please consider supporting Open Source developers.
Thanks for your time.
The docs say that a withdraw is network intensive, so you should use a custom RPC, or at least one faster than the public RPC.
You can try using Quicknode.
But if you want a free and fast one, I recommend using the GenesysGo RPC
https://ssc-dao.genesysgo.net/ (Mainnet)
https://devnet.genesysgo.net/ (Devnet)
If neither of them works, try increasing the transaction timeout settings. :)
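If you want to check that whichever endpoint you point the CLI at is actually up before retrying the withdraw, a quick sanity check helps. This is just an illustrative Python sketch using the standard Solana JSON-RPC getHealth method (the URL is the GenesysGo mainnet one from above; swap in your own RPC):
import requests

RPC_URL = "https://ssc-dao.genesysgo.net/"  # or whatever custom RPC you use

# getHealth is a standard Solana JSON-RPC method; a healthy node answers {"result": "ok"}
resp = requests.post(
    RPC_URL,
    json={"jsonrpc": "2.0", "id": 1, "method": "getHealth"},
    timeout=30,
)
print(resp.json())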
I started hosting a Redis/Celery/Python (Dash) app on Heroku about a month ago. In that time it worked flawlessly; however, in my latest update my datastore credentials changed and somehow a new bug was introduced.
It happens every time I restart my dyno: I receive the error below for about 5 minutes.
The error is:
kombu.exceptions.OperationalError: Error 8 connecting to ec2-44-208-193-34.compute-1.amazonaws.com:19130. EOF occurred in violation of protocol (_ssl.c:1129).
Somehow, after about 5 minutes, the error resolves itself and disappears.
My code looks like:
import ssl

from celery import Celery

# Connection strings have the password masked.
celery_app = Celery(
    __name__,
    broker="rediss://:*#ec2-44-208-193-34.compute-1.amazonaws.com:19130/0",
    backend="rediss://:*#ec2-44-208-193-34.compute-1.amazonaws.com:19130/1",
    broker_use_ssl={
        'ssl_cert_reqs': ssl.CERT_NONE
    },
    redis_backend_use_ssl={
        'ssl_cert_reqs': ssl.CERT_NONE
    }
)
Does anyone have insight into what might be causing this and how to prevent it?
Check which Redis version your Heroku addon is running; there are changes in v6 where exhausting the max connection pool does not result in a "max number of clients reached" kind of error but in a TLS one.
When a deploy happens, the existing dynos are holding some baseline number of connections plus whatever traffic requires, and the replacement dynos try to open new ones but are rejected by Redis.
The reason it fixes itself after a while is Redis' timeout setting, which is usually 300s by default. After 300 seconds all the old dyno connections are cleaned up, and the TLS connection issues caused by being over the max-clients limit go away.
You can lower the timeout to reduce how long the error lasts, but the better fix is either to increase the max connections via a bigger Redis plan, or to reduce the number of connections Celery is using (which is a complicated topic).
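To give an idea of the Celery side, these are the two settings that cap connection usage. A minimal sketch with placeholder rediss URLs; the numbers are illustrative and should be tuned against your plan's max-clients limit:
import ssl
from celery import Celery

celery_app = Celery(
    __name__,
    broker="rediss://:<password>@<host>:<port>/0",
    backend="rediss://:<password>@<host>:<port>/1",
    broker_use_ssl={"ssl_cert_reqs": ssl.CERT_NONE},
    redis_backend_use_ssl={"ssl_cert_reqs": ssl.CERT_NONE},
)

# Cap the broker connection pool per worker process (Celery's default is 10).
celery_app.conf.broker_pool_limit = 2

# Cap the connection pool used by the Redis result backend.
celery_app.conf.redis_max_connections = 10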
Frustratingly, if you look at the Redis stats, Heroku will not report that you went over the limit at all; it shows no indication that you suddenly tried to double (or more) the client connections. This is misleading: the connection is never really opened but is rejected so quickly that it isn't accounted for. Worse, the error is poorly masked as an SSL issue when it is really a resource-exhaustion issue.
I cloned https://github.com/Learn-NEAR/sample--thanks and when I try to run dev-deploy I get the error below. I get the same error for some other contracts too.
How can I fix this error?
BadRequestError: Error: nonce retries exceeded for transaction. This usually means there are too many parallel requests with the same access key.
Edit: Under the same conditions this error happens occasionally.
tl;dr: There is a limit on the number of accounts that can be created per second using dev-deploy, and when there is high demand this error may occur. Retrying can help get your account deployed.
When you run dev-deploy, a new account is created for you on NEAR Testnet by the testnet account. To execute the deploy tx, one of the Full Access Keys associated with this account is used.
The problem is that each key has a nonce associated with it, and each tx executed with a key must have a nonce larger than the previous nonce used with that key. This means that you can't execute two transactions in parallel using the same nonce with the same key.
When several users try to create accounts using dev-deploy on NEAR Testnet at the same time, the same nonce is used for different txs, and only the first one to be processed is included on chain.
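Since the failure is just a nonce collision, wrapping dev-deploy in a retry loop with a little backoff is usually enough. A rough sketch (the near-cli arguments and wasm path are placeholders; adjust them to your project):
import subprocess
import time

# Hypothetical command; adjust the arguments and wasm path to your project and near-cli version.
CMD = ["near", "dev-deploy", "--wasmFile", "out/main.wasm"]

for attempt in range(1, 6):
    result = subprocess.run(CMD)
    if result.returncode == 0:
        break
    # Back off a bit so we are less likely to collide on the same nonce again.
    time.sleep(2 ** attempt)
else:
    raise SystemExit("dev-deploy kept failing after 5 attempts")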
I'm trying to handle Couchbase bootstrap failure gracefully and not fail the application startup. The idea is to use "Couchbase as a service", so that if I can't connect to it, I should still be able to return a degraded response. I've been able to somewhat achieve this by using the Couchbase async API; RxJava FTW.
Problem is, when the server is down, the Couchbase Java client goes crazy and keeps trying to connect to the server; from what I see, the class that does this is ConfigEndpoint and there's no limit to how many times it tries before giving up. This is flooding the logs with java.net.ConnectException: Connection refused errors. What I'd like, is for it to try a few times, and then stop.
Got any ideas that can help?
Edit:
Here's a sample app.
Steps to reproduce the problem:
svn export https://github.com/asarkar/spring/trunk/beer-demo.
From the beer-demo directory, run ./gradlew bootRun. Wait for the application to start up.
From another console, run curl -H "Accept: application/json" "http://localhost:8080/beers". The client request is going to time out due to the failure to connect to Couchbase, but the Couchbase client is going to keep flooding the console.
The reason we choose to have the client continue connecting is that Couchbase is typically deployed in high-availability clustered situations. Most people who run our SDK want it to keep trying to work. We do it pretty intelligently, I think, in that we do an exponential backoff and have tuneables so it's reasonable out of the box and can be adjusted to your environment.
As to what you're trying to do, one of the tuneables is related to retry. By adjusting the timeout value and the retry behavior, you can keep the client referenced by the application and simply fail fast if it can't service the request.
The other option is that we do have a way to let your application know what node would handle the request (or null if the bootstrap hasn't been done) and you can use this to implement circuit breaker like functionality. For a future release, we're looking to add circuit breakers directly to the SDK.
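For what it's worth, the fail-fast / circuit-breaker idea is independent of the SDK. Here is a minimal generic sketch of the pattern in Python, with the actual Couchbase call left as a placeholder (a Java client would be wrapped the same way):
import time

class CircuitBreaker:
    """Open the circuit after N consecutive failures, probe again after a cooldown."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()            # fast fail: don't even try Couchbase
            self.opened_at = None            # cooldown elapsed, probe again
            self.failures = 0
        try:
            result = fn()                    # e.g. a Couchbase get/upsert
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()                # degraded response instead of an error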
All of that said, these are not the normal path as the intent is that your Couchbase Cluster is up, running and accessible most of the time. Failures trigger failovers through auto-failover, which brings things back to availability. By design, Couchbase trades off some availability for consistency of data being accessed, with replica reads from exception handlers and other intentionally stale reads for you to buy into if you need them.
Hope that helps and glad to get any feedback on what you think we should do differently.
Solved this issue myself. The client I designed handles the following use cases:
The client startup must be resilient to CB failure/unavailability.
The client must not fail the request, but return a degraded response instead, if CB is not available.
The client must reconnect should a CB failover happen.
I've created a blog post here. I understand it's preferable to copy-paste rather than linking to an external URL, but the content is too big for an SO answer.
Start a separate thread and keep calling ping on it every 10 or 20 seconds. Once CB is down, ping will start failing; add a check like "if ping fails 5-6 times in a row, then close all the CB connections/resources".
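Sketched out, that ping loop could look like the following (Python for illustration; ping and close_all stand in for whatever your Couchbase client exposes):
import threading
import time

def watch_couchbase(ping, close_all, interval=15, max_failures=5):
    """Call ping() every `interval` seconds; after `max_failures` consecutive
    failures, release all Couchbase connections/resources via close_all()."""
    failures = 0
    while True:
        try:
            ping()
            failures = 0
        except Exception:
            failures += 1
            if failures >= max_failures:
                close_all()
                return
        time.sleep(interval)

# threading.Thread(target=watch_couchbase, args=(my_ping, my_close), daemon=True).start()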
I'm trying to figure out how to test with braintree, and I'm running into what feels like a bandwidth error.
response = ::Braintree::Customer.create(payment_method_nonce: Braintree::Test::Nonce::Transactable)
token = response.customer.credit_card.first.token
#so far so good
response = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
#still good
response = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
#response is failure
# => Braintree::ErrorResult ... status: "gateway_rejected"
All that takes place without a pause.
If I wait a bit and run the sale line again, it works again.
This of course sets up a problem with test scripts. I can mock out the actual connection to BT, but I'm slightly worried about this. Should I be?
I work at Braintree. If you have more questions, you can always get in touch with our support team.
You can see what gateway_rejected means on the transaction statuses page of the API docs:
Gateway rejected
The gateway rejected the transaction because AVS, CVV, duplicate or fraud checks failed.
Transactions also have a gateway rejection reason, which in this case will be duplicate.
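If you want to inspect the reason programmatically, it is exposed on the transaction attached to the error result. Shown here with the Python client purely for illustration (the Ruby client exposes the same field); the credentials and token are placeholders:
import braintree

# Placeholder sandbox credentials.
braintree.Configuration.configure(
    braintree.Environment.Sandbox,
    merchant_id="your_merchant_id",
    public_key="your_public_key",
    private_key="your_private_key",
)

result = braintree.Transaction.sale({"payment_method_token": "the_token", "amount": "1.00"})
if not result.is_success and result.transaction is not None:
    # For a duplicate-check rejection this prints "gateway_rejected duplicate".
    print(result.transaction.status, result.transaction.gateway_rejection_reason)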
You can find more information about duplicate checking settings in the control panel docs:
Configure duplicate transaction checking
Duplicate transaction checking is enabled by default with a 30-second window in both the sandbox and production environments. These settings can be updated or disabled by users with Account Admin privileges.
Log into the Control Panel
Navigate to Settings > Processing > Duplicate Transaction Checking
Click Edit to adjust the time window or Enable/Disable to turn the feature on/off
Looks like it may be a rate-limit error. Search their help/docs/site for information about rate limiting so you know what the limits are and can work around them.
However...if you're talking about testing as in automated tests - I would recommend not using external services in your test suite, and mocking out everything. Ideally you want your test suite to be able to run even when the network connection is down and you don't want it slowing down when 3rd party services are slow or when your network is slow.
If you really want to do a full integration test with all your 3rd-party services, you can create a special set of tests for that, annotated with something like "#external", and schedule them to run once a week or so just to flag unexpected changes or errors.
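To illustrate the mock-everything approach, here is a small sketch using Python's unittest.mock and a pytest marker (the question's code is Ruby, where the same idea applies with test doubles or WebMock); charge() and the fake gateway are made-up names:
from unittest import mock

import pytest

def charge(gateway, token, amount="1.00"):
    """Application code under test; `gateway` is whatever wraps Braintree."""
    return gateway.sale(token, amount)

def test_charge_without_touching_braintree():
    fake_gateway = mock.Mock()
    fake_gateway.sale.return_value = {"status": "submitted_for_settlement"}

    result = charge(fake_gateway, "fake-token")

    fake_gateway.sale.assert_called_once_with("fake-token", "1.00")
    assert result["status"] == "submitted_for_settlement"

@pytest.mark.external  # run these on a schedule, not on every commit
def test_real_braintree_sandbox_roundtrip():
    ...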
I have a problem with starting my Windows Service. It's configured to start automatically, and it usually does start, but sometimes it doesn't, especially on Windows 8.
The Windows log contains the following error:
The XYZ service failed to start due to the following error: The
service did not respond to the start or control request in a timely
fashion. A timeout was reached (30000 milliseconds) while waiting for
the XYZ service to connect.
This is a .NET 2.0 service.
The standard cause of this problem is an OnStart method that performs a long synchronous operation. That is not the issue this time. In fact, I've placed a file logger at the beginning of the OnStart method and it looks like it's not invoked at all.
It turned out that the problem was caused by two issues:
the executable file (exe) was signed digitally;
there were Internet connection problems, and acquiring an IP address took a long time.
The two combined caused the service start process to time out because certificate validation took too long.
I had to use this on native C Win32 services, and looked into whether .NET has something similar. Sorry if I'm wrong.
In your OnStart, use the RequestAdditionalTime method to inform the service control manager that the operation will require more time to complete. Documentation here