I am just finishing an upload of 8000 assets to a Candy Machine (via the upload command). Everything seemed to be working well while it was creating the bundles and saving them to the cache, but once it started to write the indices I began seeing these two errors on and off:
1)
Waiting 5 seconds to check Bundlr balance.
Requesting a withdrawal of 0.638239951 SOL from Bundlr...
Successfully withdrew 0.638244951 SOL.
Writing all indices in 719 transactions...
Progress: [█░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░] 2% | 18/719
Transaction simulation failed: Blockhash not found
2)
Failed writing indices 3682-3691: Transaction was not confirmed in 60.01 seconds. It is unknown if it succeeded or failed.
I have been searching the internet, and from what I can tell these errors are out of my control... is this correct? Or what can I do to get these indices to write successfully? It's at 50% progress right now, but I assume the upload is not going to be successful when it finishes. If that is the case, do I need to run the candy machine upload command all over again, or is there a way for me to just re-run the transaction portion (where it started to fail)? I've seen some notes on retrying, but it wasn't completely clear to me.
The upload process took about 2.5 hours, so I'd like to avoid repeating that if at all possible.
Help is very much appreciated.
Both errors are common, so you don't have to worry about them. You should use a custom RPC (via --rpc-url on the upload command) and wait until the upload command ends. When the upload command ends, run the verify_upload command to see whether everything went well; if verify_upload shows an error, run upload again and repeat until verify_upload shows the "ready to deploy" message.
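For reference, a rough sketch of those two commands (the CLI entry point, -e/-k/-cp values, and RPC URL below are assumptions about a typical candy-machine-v2 setup; --rpc-url and verify_upload are the parts that matter). Re-running upload should resume from the cache file rather than redoing the whole 2.5-hour upload.
# Sketch only: adjust the entry point, environment, keypair, and config paths to your project.
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload \
    -e mainnet-beta -k ~/.config/solana/id.json -cp config.json \
    --rpc-url https://your-private-rpc.example.com ./assets
# Once the upload finishes, check that every index was written:
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts verify_upload \
    -e mainnet-beta -k ~/.config/solana/id.json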
I am running some Cypress (version 9.5.2) tests on a project, and when I execute them on my computer there is no problem; everything is fine.
But when I run them on my GitLab runner with Docker as the executor, I get some errors that I don't understand.
For example, in the middle of a cy.wait(30000); I get this error:
CypressError: Timed out after waiting 60000ms for your remote page
to load. Your page did not fire its load event within 60000ms. You
can try increasing the pageLoadTimeout value in cypress.json to
wait longer. Browsers will not fire the load event until all
stylesheets and scripts are done downloading.
So I don't understand what this error is related to. Do you have any idea? I can share a video if that would help.
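If the page really is loading slowly inside the Docker executor, the timeout the error message refers to can also be raised just for the CI run from the CLI, instead of editing cypress.json (a sketch; the 120000 ms value is arbitrary):
npx cypress run --config pageLoadTimeout=120000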
I have a Laravel app running on the Azure Web App (Linux) service, all running nice and smoothly until I reach a feature that exports a query to an XLS file for download. Then I receive a 502 error.
On my local environment it works normally: I can export the query to XLS with no issues, and it is not a large query, just a few rows.
In the same app, I have a function that exports just one row at a time to XLS and it works fine, so the problem only appears when I go for a larger(ish) query.
Any ideas? I have tried scaling up, restarting the app and Apache, and changing the .ini settings (via .htaccess, to increase the execution time).
There is no trace in the logs either; there is something about the container crashing, but I cannot tie it to this particular error.
OK, I managed to solve it... it was not straightforward at all. It has to do with the size of the query: even though it is not big by any means (a couple of thousand rows max), raising the memory limit to 1024M or beyond still ended in a 502 error. I decided to try something different and moved from Laravel Excel to Fast-Excel, which is less featured, but man... it works. Now everything downloads perfectly. If you are having this issue, give Fast-Excel a try.
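If you want to try the same switch, the dependency change is a Composer swap (the package names below assume the commonly used maatwebsite/excel and rap2hpoutre/fast-excel packages; the export code itself still needs to be rewritten against the Fast-Excel API):
composer remove maatwebsite/excel
composer require rap2hpoutre/fast-excel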
SDK: Apache Beam SDK for Go 0.5.0
We are running Apache Beam Go SDK jobs in Google Cloud Dataflow. They had been working fine until recently, when they intermittently stopped working (no changes were made to code or config). The error that occurs is:
Failed to retrieve staged files: failed to retrieve worker in 3 attempts: bad MD5 for /var/opt/google/staged/worker: ..., want ; bad MD5 for /var/opt/google/staged/worker: ..., want ;
(Note: it seems as if a second hash value is missing from the error message.)
As best I can guess, there's something wrong with the worker: it seems to be comparing MD5 hashes of the worker and missing one of the values. I don't know exactly what it's comparing against, though.
Does anybody know what could be causing this issue?
The fix for this issue seems to have been rebuilding the worker_harness_container_image with the latest changes. I had tried this, but I didn't have the latest release when I built it locally. After I pulled the latest from the Beam repo, rebuilt the image (as per the notes here: https://github.com/apache/beam/blob/master/sdks/CONTAINERS.md), and reran the job, it seemed to work again.
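Roughly, the steps looked like this (a sketch only: the Gradle target, image name, and pipeline flags are assumptions; the linked CONTAINERS.md has the authoritative build commands):
# Rebuild the Go SDK worker container from the latest Beam sources
cd beam && git pull origin master
./gradlew :beam-sdks-go-container:docker        # target name may differ; see CONTAINERS.md
# Push it somewhere Dataflow can reach, then point the job at it
docker tag <built-image> gcr.io/YOUR_PROJECT/beam-go-worker:latest
docker push gcr.io/YOUR_PROJECT/beam-go-worker:latest
go run ./main.go --runner=dataflow \
    --worker_harness_container_image=gcr.io/YOUR_PROJECT/beam-go-worker:latest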
I'm seeing the same thing. If I look into the Stackdriver logs I see this:
Handler for GET /v1.27/images/apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515/json returned error: No such image: apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515
However, I can pull the image just fine locally. Any ideas why Dataflow cannot pull it?
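For reference, the local check is just pulling the exact tag from the log line above, which succeeds:
docker pull apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515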
I've uploaded a single file to Heroku that crawls a website and responds with the edited content in JSON format on an HTTP request. I now want to update the content regularly so that it stays up to date. I tried using the Heroku Scheduler; however, I am failing to schedule the process so that it runs correctly.
I have specified the following process in the Heroku Scheduler:
run phantomjs phantom.js //Using 1X Dyno, every hour.
//phantom.js is the file that contains my source code and that runs the server.
However, if I enter
heroku ps
into the terminal, I only see one web dyno running and no scheduler task. Also, if I type
heroku logs --ps scheduler.1
as described in the Scheduler documentation, there is no output.
What am I doing wrong? Any help would be appreciated.
For what it sounds like you want to accomplish, you need to be constantly running:
1 Web Dyno
1 Background Worker
When your scheduled task executes, it will be run by the background worker, which, since you haven't provisioned it, isn't executing.
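In practice that means scaling up a worker dyno (a sketch, assuming your Procfile already defines a worker process type):
heroku ps:scale worker=1
heroku ps    # should now list a web dyno and a worker dyno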
Found it: I only had to write
phantomjs phantom.js
in order to get it working. It was the "run" that made the expression invalid.
I have a decently large DB that I'm trying to pull down locally from Heroku via db:pull.
I can never stay at my machine long enough to keep it from going to sleep, which effectively kills the connection and terminates the process. GOTO 1.
I know I could change my system settings to stop my computer from sleeping, which would keep the connection alive, but is there a way to continue a previous pull?
Or maybe the solution is just not to use db:pull for a large db.
heroku db:pull supports resuming. When you start a pull it will create a .dat file in your project (and get rid of it when it's completed). You can do:
heroku db:pull --resume FILE # resume transfer described by a .dat file
to start the pull from the previous location.
Heroku pgbackups may be a better option for grabbing the large DB file - http://devcenter.heroku.com/articles/pgbackups.
Although I'd be more inclined to prevent your computer from sleeping - just disable the sleep functionality during the download, from Settings/Control Panel depending on your OS.
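A sketch of the pgbackups route (the command names follow the legacy pgbackups add-on described in the linked article; the local database name is a placeholder):
heroku pgbackups:capture
curl -o latest.dump "$(heroku pgbackups:url)"
pg_restore --verbose --clean --no-acl --no-owner -d my_local_db latest.dump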