Hey guys, I'm attempting to test deployment of the example API GET contract provided in the Chainlink docs. Is there a reason why
https://docs.chain.link/docs/make-a-http-get-request/
fails to deploy?
Even when using a Polygon-supported oracle and job ID? Is something wrong with the v8 example contracts? It compiles fine, but cannot deploy:
execution reverted
Initially I modified it, using it as a base for my own version. That failed, and now, revisiting the original contract, this fails too, telling me that there is an issue with this v8 example.
There are no changes to the contract in the link. I copy-paste it into Remix and attempt to deploy to Polygon mainnet: execution reverted.
Thanks!
You need to set the Chainlink (LINK) token address for Polygon and comment out the setPublicChainlinkToken(); call in the constructor.
We have a Cloud Service that we have been deploying/updating without issue. In the past two weeks, every time we try to deploy the package, we get the error "Deployment could not be created - There was an error processing your request. Try again in a few moments".
I am at a loss as to how to even debug the issue to get more detail. If anybody has any advice on how to get a better error description, it would be appreciated.
The only changes in this deployment are some changes to the static files in the package so it is unclear what is causing the issue. The process we use is (1) build the package, (2) upload the package, (3) deploy in the staging environment. The package gets uploaded but fails to deploy (step 3).
Any help as to what the issue is, or how to get better diagnostic information, would be great.
When uploading the wasm binary as a compiled smart contract in the 'ink-workshop', the canvas-ui that I am running just errors.
Canvas-ui error:
Uncaught error. Something went wrong with the query and rendering of this component.
Cannot read properties of undefined (reading 'args')
Step: 'Drag the flipper.contract file that contains the bundled wasm blob....'
The associated GitHub issues section is read-only, hence why I am posting here. Does anyone know what might be the problem?
In case others are wondering what to do in this instance: I was able to deploy using the official Polkadot web interface.
Connect your locally running node via Settings, and there you can deploy your flipper contract.
This is a good article on the Astar Network docs on how to do it, using the basic flipper contract example.
There is a maintenance problem with canvas-ui, and they want to fix it:
https://github.com/paritytech/canvas-ui/issues/119#issuecomment-1020114242
We've been experiencing issues with DockerImageFunction. All deployments fail with the following (and very cryptic to me) error:
Lambda function XXX failed to stabilize since it is in InProgress state
Other functions deploy without any problems. Deployments were working fine until Wednesday 07.04.2021. Since then, they fail every time. We haven't changed anything in our CDK code, in that function Dockerfile or its code.
We deploy using CDK with TypeScript. I tested with 1.93 and 1.97 (the latest version at the time of writing).
Any clue?
This looks to be a bug that Amazon will need to resolve. However...
In the event someone needs a temporary workaround to deploy their CDK stack which includes a DockerImageFunction and they do not want to delete the whole stack first (perhaps because some resources are S3 buckets with important data), here are some steps that worked for me. This assumes your stack is in the state described above, i.e. an update has failed, the system attempted a rollback, and then the update rollback also failed.
From the CloudFormation console select "Continue update rollback"
Select "advanced options" and choose to skip the function or functions that use the containerized deployment (i.e. DockerImageFunctions)
The rollback should now complete successfully
If you try to deploy again now, the stack will just return to the UPDATE_ROLLBACK_FAILED state, so don't bother. Instead, comment out all the code that instantiates and references the DockerImageFunctions in your CDK stack class (sketched below). Then perform the deployment, which should remove those functions and their various roles and permissions from the CloudFormation stack.
Once this is complete you can uncomment all the stack code you just commented out and perform a final deploy. This one should succeed. It did for me, at least: the latest version of my application is now deployed.
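For reference, the block you would comment out is wherever the DockerImageFunction is instantiated in your stack class. Here is a minimal sketch, assuming CDK v1 with TypeScript; the stack name, construct ID, and image directory are placeholders, not anything from the original question:

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as path from 'path';

export class MyServiceStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Temporarily comment out this definition (and anything that references it)
    // for the intermediate deploy, then restore it for the final deploy.
    new lambda.DockerImageFunction(this, 'ImageFn', {
      // Builds the container image from a local Dockerfile in the given directory.
      code: lambda.DockerImageCode.fromImageAsset(path.join(__dirname, 'image')),
    });
  }
}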
It seems likely that if I perform another deploy after this, the same error will occur and I will have to go through these five steps again. I haven't tried it yet. But at least this is a workaround, however clumsy.
FYI: this issue should now be resolved. Confirmed with AWS support and on our stacks.
In the description of the Google Cloud Build app on GitHub (https://github.com/marketplace/google-cloud-build), every build seems to be identifiable by name:
Cloud Build Triggers With Names
In my current set up, however, every build is displayed by id, which is not very useful:
Cloud Build Triggers with IDs
Is there something I am not doing to make it work as expected?
When I think about Cloud Build, I understand that every individual build has a unique identifier. For me, this is what I would want to see in a report of a build being triggered. Given a Cloud Build ID, I can then use it for visibility into the underlying process that unique instance of the build caused. I can see all the steps and the outcome of each of them. I couldn't imagine wanting anything other than a build identifier being reported to me as the result of a Cloud Build being performed.
References:
Viewing build results
I posted the original issue in the issue tracker in January 2020. It has since been fixed, and real names have been shown since August 17th. You do have to re-select the new names if you had made any of the previous hash-based checks required steps in your build.
For any triggers created prior to August 2020, data sharing needs to be enabled for this to work, as documented here:
https://cloud.google.com/cloud-build/docs/automating-builds/create-github-app-triggers#data_sharing
You need to go to the Settings -> Data Sharing section of Cloud Build and enable it.
In some cases you might get a "Failed to enable trigger data sharing" error when trying to do this; if so, you might want to try:
1 - Disable any required checks you have in GitHub related to Cloud Build
2 - Try doing it in Chrome without any ad blocker; for me it wasn't working in the Brave browser but worked in Chrome
This will allow GitHub to show the trigger name instead of an ID in the pull request checks.
I have a single repository on GitHub that hosts my Lambda functions. I would like to be able to deploy the new versions whenever new logic is pushed to master.
I did a lot of research and found a few different approaches, but nothing really clear. I would like to know what others feel would be the best way to go about this, and maybe some detail (if possible) on how that pipeline is set up.
Thanks
Welcome to StackOverflow. You can improve your question by reading this page.
You can set up a CI/CD pipeline using CircleCI with its GitHub integration (which is an online service, so you don't need to maintain anything yourself, like a Jenkins server, for example).
Upon every commit to your repository, a CircleCI build will be triggered. Once the build process is over, you can run sls deploy or sam deploy, use Terraform, or even create a script to upload the .zip file from your GitHub repo to an S3 bucket and then, within your script, invoke the create-function command. There's an example of how to deploy serverless applications using CircleCI along with the Serverless Framework here.
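A rough sketch of that last option (upload the zip, then create or update the function), assuming the AWS SDK for JavaScript v3 in TypeScript; the bucket, key, role ARN, and function name are placeholders:

import { readFileSync } from 'fs';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { LambdaClient, CreateFunctionCommand } from '@aws-sdk/client-lambda';

const s3 = new S3Client({});
const lambda = new LambdaClient({});

async function deploy(): Promise<void> {
  // 1. Upload the build artifact produced by the CI job.
  await s3.send(new PutObjectCommand({
    Bucket: 'my-deploy-bucket',          // placeholder
    Key: 'builds/my-function.zip',       // placeholder
    Body: readFileSync('dist/my-function.zip'),
  }));

  // 2. Create the function from the uploaded object
  //    (use UpdateFunctionCodeCommand instead if the function already exists).
  await lambda.send(new CreateFunctionCommand({
    FunctionName: 'my-function',                            // placeholder
    Runtime: 'nodejs18.x',
    Handler: 'index.handler',
    Role: 'arn:aws:iam::123456789012:role/my-lambda-role',  // placeholder
    Code: { S3Bucket: 'my-deploy-bucket', S3Key: 'builds/my-function.zip' },
  }));
}

deploy().catch((err) => {
  console.error(err);
  process.exit(1);
});

In a CI job you would run this script (or the equivalent aws lambda create-function / update-function-code CLI calls) as the final step after the build.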
Other options include TravisCI, AWS CodeDeploy, or even maintaining your own CI/CD server. The same logic applies to all of these tools though: commit -> build -> deploy (using whichever tool you've chosen).
EDIT: After @Matt's answer, it clicked that the OP never mentioned the Serverless Framework (I had, somehow, assumed he was already using it, so I pointed the OP to tutorials that use the Serverless Framework). I have since updated my answer with a few other options for serverless deployment.
I know that this isn't exactly what you asked for, but I use the Serverless Framework (https://serverless.com) for deployment and I love it. I don't do my deployments when I push to my repo. Instead, I push to my repo after I've deployed. I like this flow because a deployment can fail due to so many things, whereas pushing to GitHub is much less likely to fail. In this way, I prevent pushing code that failed to deploy to my master branch.
I don't know if you're familiar with the framework, but it is super simple. The website describes the simple steps to create and deploy a function like this:
# Step 1. Install serverless globally
$ npm install serverless -g

# Step 2. Create a serverless function
$ serverless create --template hello-world

# Step 3. Deploy to cloud provider
$ serverless deploy

# Your function is deployed!
$ http://xyz.amazonaws.com/hello-world
There are also a number of plugins you can use to integrate easily with custom domains on API Gateway, prune older versions of Lambda functions that might be filling up your limits, etc.
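If you would rather keep that configuration in TypeScript as well, the framework also accepts a serverless.ts file instead of serverless.yml. A minimal sketch, assuming the @serverless/typescript types plus the serverless-domain-manager and serverless-prune-plugin plugins; the service name, handler, and domain below are placeholders:

import type { AWS } from '@serverless/typescript';

const serverlessConfiguration: AWS = {
  service: 'hello-world',
  frameworkVersion: '3',
  plugins: ['serverless-domain-manager', 'serverless-prune-plugin'],
  provider: {
    name: 'aws',
    runtime: 'nodejs18.x',
  },
  functions: {
    hello: {
      handler: 'handler.hello',
      events: [{ http: { method: 'get', path: 'hello' } }],
    },
  },
  custom: {
    // Custom domain on API Gateway (serverless-domain-manager).
    customDomain: { domainName: 'api.example.com' },
    // Keep only the three most recent Lambda versions (serverless-prune-plugin).
    prune: { automatic: true, number: 3 },
  },
};

module.exports = serverlessConfiguration;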
Overall, I've found it to be the easiest way to manage and deploy my lambdas. Hope it helps!
Given that you're using AWS Lambda, you may want to consider CodePipeline to automate your release process. [SAM](https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html) may also be interesting.
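One way to wire CodePipeline up for this is the CDK's pipelines module. Here is a rough sketch (CDK v2 in TypeScript); the repository, branch, and stack contents are placeholders, and it assumes a GitHub token stored in Secrets Manager under the default name 'github-token':

import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';

// Stack holding the Lambda function(s) to be deployed.
class LambdaStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    new lambda.Function(this, 'Handler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist'), // placeholder build output
    });
  }
}

// Stage that the pipeline deploys on every push to master.
class AppStage extends cdk.Stage {
  constructor(scope: Construct, id: string, props?: cdk.StageProps) {
    super(scope, id, props);
    new LambdaStack(this, 'Lambdas');
  }
}

class PipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const pipeline = new CodePipeline(this, 'Pipeline', {
      synth: new ShellStep('Synth', {
        input: CodePipelineSource.gitHub('your-org/your-lambda-repo', 'master'),
        commands: ['npm ci', 'npm run build', 'npx cdk synth'],
      }),
    });
    pipeline.addStage(new AppStage(this, 'Prod'));
  }
}

const app = new cdk.App();
new PipelineStack(app, 'LambdaPipelineStack');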
I too had the same problem: I wanted to manage 12 Lambdas with one Git repository. I solved it by introducing Travis CI. Travis CI saved time and is really useful in many ways. We can check the logs whenever we want, and you can share the logs with anyone by sharing the URL. The sample documentation of all the steps can be found here. You can go through it. 👍