When I deploy a Solana program to devnet, it works fine.
However, when I try to deploy the same program to production (mainnet), I get the following error:
Error: Deploying program failed: Error processing Instruction 1: custom program error: 0x1
There was a problem deploying: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "" }.
The command I am using is:
solana -k admin_key.json -u mainnet-beta program deploy target/deploy/pixels.so
This command works fine if I swap mainnet-beta for devnet.
It's worth noting that I can deploy to production (and I have) using:
solana -k admin_key.json -u mainnet-beta deploy target/deploy/pixels.so
Does anyone understand why there's a discrepancy between devnet and mainnet here?
Here's a link to the currently deployed program on mainnet:
https://explorer.solana.com/address/JBAnZXrD67jvzkWGgZPVP3C6XB7Nd7s1Bj7LXvLjrPQA
This was deployed using solana [...] deploy (versus the modern way of solana [...] program deploy).
You can see an example of a program deployed the modern way to devnet here:
https://explorer.solana.com/address/6uCCuJaQSQYGx4NwpDtZRyxyUvDMUJaVG1L6CmowgSTx?cluster=devnet
Error 0x1 typically means that there isn't enough SOL in the payer key to cover the deployment. You'll need to check that the key holds enough SOL on mainnet before the deployment can succeed.
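As a quick sanity check (a minimal sketch reusing the key and paths from the question; the factor-of-two rule of thumb is approximate):

# Check how much SOL the payer key actually holds on mainnet
solana -k admin_key.json -u mainnet-beta balance

# Estimate the rent-exempt minimum for the program's size; solana program deploy
# allocates a program account roughly twice the size of the .so, so the balance
# needs to comfortably exceed this
solana rent $(wc -c < target/deploy/pixels.so)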
I am currently trying to host a TypeScript/Sequelize project in Google Cloud Build.
I am connecting through a Unix socket and the Cloud SQL proxy.
The app is deployed, and a test running "sequelize.authenticate()" seems to be working.
Migrations to localhost seem to be working as well.
I have written a Cloud Build trigger that does the following (sketched below):
- builds a simple Docker image
- pushes the simple Docker image
- runs npm install
- downloads the cloud_sql_proxy
- starts the cloud_sql_proxy
The next step would be to migrate a simple table to my gcloud database.
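For reference, here is a minimal sketch of what that trigger's cloudbuild.yaml might look like. The image name my-app and the instance connection string my-project:my-region:my-instance are placeholders, not values from the question, and the node builder image is illustrative:

steps:
  # Build and push a simple Docker image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app']
  # Install dependencies
  - name: 'node'
    entrypoint: 'npm'
    args: ['install']
  # Download and start the Cloud SQL proxy, then migrate in the same step:
  # a process started in one Cloud Build step does not survive into the next,
  # so the proxy and the migration have to share a step
  - name: 'node'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        wget -q https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
        chmod +x cloud_sql_proxy
        mkdir -p /cloudsql
        ./cloud_sql_proxy -dir=/cloudsql -instances=my-project:my-region:my-instance &
        sleep 5
        npx sequelize-cli db:migrate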
Please check out my drawing for further details: https://excalidraw.com/#json=LnvpSjngbk7h1F0RzBgUP,HPwtVWgh-sFgrmvfU9JK0A
If I try to run "npx sequelize-cli db:migrate", gcloud gives the following message: ERROR: connect ENOENT /cloudsql/xxxxxxx/.s.PGSQL.5432
But if I replace the command with npx sequelize-cli --version, it simply prints out the version and moves on with the rest of the trigger operations.
I am trying to deploy on Google Cloud Platform for the first time using the following two tutorials:
Gcloud build quickstart
Gcloud deploy quickstart
However, running the final command gcloud builds submit --config cloudbuild.yaml, where cloudbuild.yaml is the name of the YAML file as per the tutorial, throws the following error:
Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
The image created by the build quickstart is not appropriate for the deploy quickstart. The latter, which uses Cloud Run, needs something talking HTTP on port 8080.
If you use the deploy quickstart as-is, that should work. You can test this container image locally using:
docker run \
--interactive --tty \
--publish=8080:8080 \
gcr.io/gcbdocs/hello
and then try browsing or curling the endpoint http://localhost:8080. You should see Hello world!
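For example:

# Hit the locally running container; per the quickstart it should answer on /
curl http://localhost:8080
# Hello world!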
The error message from Cloud Run is somewhat generic and means that something went wrong. As a result it's often unhelpful.
If you're confident you're deploying a container image that talks HTTP on port 8080, I recommend you step through the instructions to try to see where you went wrong.
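One way to check that contract locally (my-image is a placeholder for your own image, not something from the quickstarts) is to inject PORT the way Cloud Run does:

# Cloud Run sets a PORT environment variable (8080 by default) and expects
# the container to listen on it
docker run --env PORT=8080 --publish 8080:8080 my-image
curl http://localhost:8080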
So, I have a chaincode application based on the fabcar sample from fabric-samples. Yesterday, I was able to bring it up and run the initLedger function, but found issues with my chaincode when running further invocations. However, when I brought the network back up after debugging (which turned out to be a nightmare in Go), I can no longer get the InitLedger to execute; it just gives me the following error:
Error: endorsement failure during invoke. response: status:500 message:"error in simulation: failed to execute transaction f2589dd7849c01064d5ed827867085d02615ac4fe4d5edcaed31b1a7d5635c94: could not launch chaincode
fakenews_1.0:230eafea48b912ae8f96bfc79bea3b02b4538992547e9de284f80c66a1f52550: error starting container: error starting container: API error (400): OCI runtime create failed: container_linux.go:349: starting container process caused \"exec: \\\"chaincode\\\": executable file not found in $PATH\": unknown"
As far as I can tell, this is due to an issue with Docker, but I can't really figure out how to solve it in my case. Has anyone run into this before?
For extra information, the main difference between the fabcar chaincode and mine is that I am reading from a JSON file and mapping that to a list of objects which are then put on my blockchain. At least that's what I'm trying to do, because the one time I managed to run the InitLedger, my QueryAll invocation came up empty.
Check the chaincode path. I think your chaincode path is wrong.
Let's say you are mapping the chaincode directory like below in your compose file:
volumes:
    - ../chaincode:/opt/gopath/src/github.com/chaincode
Then export CC_SRC_PATH=/opt/gopath/src/github.com/chaincode.
Use the above path when executing the install command; you can pass it with -p ${CC_SRC_PATH}:
peer chaincode install -n mycc -v ${VERSION} -l ${LANGUAGE} -p ${CC_SRC_PATH} >&log.txt
NOTE: I'm assuming that you are running all these commands inside the cli container.
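As a quick sanity check (a sketch, assuming the container is named cli as in the fabric-samples compose files), confirm the chaincode really is at that path inside the container:

# If this prints nothing, the volume mapping (and therefore -p) points at the wrong place
docker exec cli ls /opt/gopath/src/github.com/chaincode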
I am getting a timeout error when trying to deploy to a VM instance hosted on AWS. Manually, I can log in using:
ssh -i myKeyFile.pem myuser@IP
Once I access the remote machine, I can execute some Docker commands and everything works fine. But now that I need to automate that in the CD pipeline, I am getting the following error:
2020-06-02T21:37:12.6877276Z Trying to establish an SSH connection to ***#IP:port
2020-06-02T21:38:52.4629461Z ##[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake.
2020-06-02T21:38:52.4685976Z ##[section]Finishing: Run shell commands on remote machine
The steps I follow to make the SSH connection are:
I created an SSH service connection in the project settings in Azure DevOps
I created the CD pipeline
I added an SSH task with the following parameters
When I manually trigger it to test whether it works, the release starts fine, but after roughly 1 minute 43 seconds I get the error:
Then, when I review the logs, it is the same error I pasted at the beginning:
[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake
I've increased the handshake timeout setting from the default (20000) to 90000, but no luck.
Has anyone faced this problem before?
It seems there is an ongoing issue with the default agent pools from Azure DevOps. Lots of people have reported this, and the Azure DevOps team is working on it at the time this post is being written (I couldn't find the post where all of that is detailed; I will add it later on).
The workaround is to create a self-hosted agent. After it has been created, you will need to re-create your CD pipeline using the new self-hosted agent.
The rest of the SSH task configuration depends on your needs. But if you want to test that the SSH connection works, just print something:
echo "I'm connected"
After this, your CD pipeline should be working fine.
More details on how to create the self-hosted agent on Windows are in the docs; there are also links for Linux and Mac.
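On Linux the setup boils down to something like this (a sketch; the package file name and version are illustrative, so take the exact download from the linked docs):

# Unpack the agent package downloaded from your Azure DevOps agent pool page
mkdir myagent && cd myagent
tar zxvf ~/Downloads/vsts-agent-linux-x64-2.x.x.tar.gz
# Configure the agent (prompts for your organization URL and a PAT), then run it
./config.sh
./run.sh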
I had a similar issue with a VM in Azure. It turned out I had set the security group to only allow SSH in from my local network, and Azure DevOps agents obviously run in a Microsoft network and were coming from a different IP address range. The solution was to open up SSH to all source IP addresses. You can get the list of IP address ranges DevOps agents use, but they appear to change every week, which isn't very helpful.
See https://learn.microsoft.com/en-us/azure/devops/organizations/security/allow-list-ip-url?view=azure-devops#microsoft-hosted-agents
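Since the question's VM is on AWS, the equivalent fix there would look something like this (a sketch; sg-0123456789abcdef0 is a placeholder, and 0.0.0.0/0 opens SSH to the whole internet, so narrow it if you can):

# Allow inbound SSH from any source IP on the instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 0.0.0.0/0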
I am testing a business network I created. I ran the composer-rest-server and all worked fine, then shut the server down as suggested in the developer guide. Then I used yo hyperledger-composer to create the skeleton of the Angular app. However, now the Angular app is showing in the local browser, but the composer-rest-server is not.
Expected Behavior:
I should be able to start the composer-rest-server on localhost:3000 and the Angular app as well.
Actual Behavior:
I get this message:
Discovering types from business network definition ...
Connection fails: Error: Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
It will be retried for the next request.
Exception: Error: Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
Error: Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
at _checkRuntimeVersions.then.catch (/home/node/.nvm/versions/node/v6.11.2/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:696:34)
Your Environment
composer-cli#0.11.3
generator-hyperledger-composer#0.11.3
composer-rest-server#0.11.3
Docker version 17.06.0-ce, build 02c1d87
docker-compose version 1.13.0, build 1719ceb
The Problem
If you kill your Fabric instance using ./stopFabric after starting it with the ./startFabric command, then all the containers that were a part of the business network are killed as well, and therefore you need to reinstall the .bna and start the network again. (The development flow provided is purposely volatile for rapid development.)
The Solution
1.) Type docker ps to see all of your running containers. You should see none if you are getting that error, because your peer is not responding to pings.
2.) Open a separate terminal, navigate to where you have fabric-dev-servers, and run ./startFabric. This will start all the containers, like your network Certificate Authority, the peer, the orderer, etc.
3.) Return to your project in another terminal. Do Steps 1 and 2 from the developer tutorial (you likely won't need to do Step 3, since you probably already imported the network administrator identity while going through the tutorial).
4.) Run composer network ping --card admin@tutorial-network. The ping should go through.
5.) Run docker ps. You should see 4 containers running
6.) Run composer-rest-server and follow the steps from the tutorial.
7.) Run cd tutorial-network-app to switch to where your Angular application is (or wherever you generated it with the yo command).
8.) Navigate to http://localhost:3000 and everything should work.
Any other questions or problems just reply here and I can help.
The expected behaviour is that the REST server is already running: the generator uses LoopBack to spin up a REST server (that's why you shut down the previous REST server). It's described here https://hyperledger.github.io/composer/unstable/tutorials/developer-guide.html under 'Generate your Skeleton Web Application'.
After you have created the application, following completion of the yo hyperledger-composer questions (and after providing the answers), you run your application using npm start from within the generated application directory. Your app is accessible at http://localhost:4200.
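Concretely, reusing the tutorial-network-app directory name from the steps above:

cd tutorial-network-app   # or whatever directory the yo generator created
npm start                 # serves the Angular app on http://localhost:4200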