How to test a blockchain lottery using Chainlink VRF v2 and Chainlink Keepers? - chainlink

Thanks to Patrick, and after reading the Chainlink blog post on how to build a blockchain lottery, I am eager to build a similar version. It will use the latest Chainlink VRF v2 and Keepers.
However, the supported test networks for VRF v2 and Keepers are Rinkeby and Kovan, respectively.
So any hint on how to approach this?

For now, one of the only testnets on which both Keepers and VRF v2 are available is the BNB Chain testnet. You can test your application there until VRF v2 and Keepers are available on the same Ethereum testnet.
References:
Keepers Supported Chains
VRF v2 Supported Chains

Related

List all the scheduled snapshots in a given project and region programmatically (golang)

I am trying to use a Golang client to programmatically list all the scheduled snapshot policies in a given project and region and describe them.
I am able to fetch them using gcloud commands, but I am wondering how I can do the same programmatically (preferably with the Compute Golang client)?
gcloud compute resource-policies list --project myproject
gcloud compute resource-policies describe my-snapshot-policy --project myproject --region myregion
thanks in advance.
Per @john-hanley, you are encouraged to demonstrate your own attempt to solve the problem in your question.
Google provides SDKs for all of its services. There are 2 flavors and this can be confusing. The original style which you can find for any Google service are called API Client Libraries. For Google Cloud Platform many (!) of the services also (!) have Cloud Client Libraries. See Google Client Libraries Explained.
For Compute for Golang, there's a new Cloud Client Library.
You can see examples of its use here. I encourage you to follow Google's style including by using Application Default Credentials.
You will want to use a ResourcePoliciesClient and the client's Get and List methods.

Conditionally enable x-ray for API Gateway and Lambda in serverless framework

I am trying to enable X-Ray only when I need it, to save some bucks. The following serverless.yml loads the environment variables from the .env file. However, it seems serverless only allows true, Active, and PassThrough. Is there any way to bypass this? Thanks.
# serverless.yml
provider:
  name: aws
  runtime: nodejs10.x
  logs:
    restApi: ${env:ENABLE_X_RAY, false}
  tracing:
    apiGateway: ${env:ENABLE_X_RAY, false}
    lambda: ${env:ENABLE_X_RAY, false}
plugins:
  - serverless-dotenv-plugin

# .env
ENABLE_X_RAY=true
If the entry point of your service is API Gateway, you can configure sampling rules and limits in the AWS X-Ray console or via the API to control the number of requests that are sampled by X-Ray.
See this article for an introduction to sampling in X-Ray:
https://aws.amazon.com/blogs/aws/apigateway-xray/
Let me know if you have further questions regarding this.
Update
Sampling rules may be specified only in X-Ray.
https://docs.aws.amazon.com/xray/latest/devguide/xray-console-sampling.html
This allows you to limit the number of traces no matter how many API Gateway or EC2 instances you have for handling your requests.
Small caveat: as of today, this mode of sampling is supported only if the entry point is API Gateway, or if you have version 2.0 or later of the X-Ray daemon running on your instances (EC2 or otherwise). If the entry point is Lambda, this sampling is not supported today but will be soon.
In your case it seems you are using API Gateway as your entry point, so you can definitely configure sampling rules in X-Ray console and have that take effect globally across all your API Gateway endpoints.
You can also configure different sampling rules for different URLs, for example /auth sampled at 5 TPS and /products at 1 TPS, with different reservoirs based on your use case.
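As a sketch of what such a rule looks like, here is a sampling rule document in the shape the X-Ray API expects (the rule name, rate, and reservoir values are illustrative, not from the question):

```json
{
  "SamplingRule": {
    "RuleName": "AuthEndpoint",
    "Priority": 10,
    "FixedRate": 0.05,
    "ReservoirSize": 5,
    "ServiceName": "*",
    "ServiceType": "*",
    "Host": "*",
    "HTTPMethod": "*",
    "URLPath": "/auth",
    "ResourceARN": "*",
    "Version": 1
  }
}
```

A rule like this can be created from the console, or with the CLI via `aws xray create-sampling-rule --cli-input-json file://rule.json`. The reservoir guarantees a fixed number of traces per second before the fixed-rate percentage kicks in.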

Unit Test GraphQL schemas/queries made in AWS AppSync?

I have a simple question: is there a way/program/method to create unit tests against the API URL generated by AWS AppSync, to verify the validity of the created GraphQL schemas, queries, mutations, etc.?
There is an open-source AppSync Serverless plugin which has offline emulator support. You may find it useful: https://github.com/sid88in/serverless-appsync-plugin#offline-support
Another good recommendation is to have two separate AppSync APIs: one hosting your production traffic, the other for testing changes before they go to production. This is significantly easier if you use CloudFormation (highly recommended) to manage your infrastructure.
If you want to validate your API is working periodically (every minute or so), you could create a canary like the following:
Create a Lambda function which runs on a schedule. This Lambda function makes various GraphQL requests and can emit success/failure metrics to CloudWatch.
Set up a CloudWatch alarm so you can be notified if your success/failure metric is out of the ordinary.
For the canary use-case see:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html
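The core of such a canary is just an HTTP POST of a GraphQL document. A minimal sketch, assuming an API-key-authenticated AppSync endpoint; the endpoint URL, API key, and query below are hypothetical placeholders, and the CloudWatch metric emission is left as a comment:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildQuery wraps a GraphQL document in the JSON body AppSync expects.
func buildQuery(query string) ([]byte, error) {
	return json.Marshal(map[string]string{"query": query})
}

func main() {
	// Hypothetical endpoint and key; substitute your own API's values.
	endpoint := "https://example.appsync-api.us-east-1.amazonaws.com/graphql"
	body, err := buildQuery(`query { listTodos { items { id } } }`)
	if err != nil {
		fmt.Println("canary failure:", err)
		return
	}

	req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(body))
	if err != nil {
		fmt.Println("canary failure:", err)
		return
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("x-api-key", "YOUR_API_KEY")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// Here you would emit a failure metric to CloudWatch.
		fmt.Println("canary failure:", err)
		return
	}
	defer resp.Body.Close()
	// Here you would emit a success/failure metric based on the status
	// code and the presence of an "errors" array in the response body.
	fmt.Println("status:", resp.StatusCode)
}
```

Running the function on a schedule and alarming on the emitted metric gives you the periodic validity check described above.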
There is also the amplify-appsync-simulator package, which is supposed to help with testing AppSync, but there is no documentation on how to use it. It is used by the serverless-appsync-simulator Michael wrote, and by Amplify itself.

Hyperledger composer for enterprise

I saw that Hyperledger Composer is really easy to work with, and we can deploy Composer code on Hyperledger Fabric. But I found that it creates only one channel. Will we have the flexibility to talk to specific peers, as we can with Go chaincode on Fabric? Can we call external web services from Composer JavaScript code, as we can from Go on Fabric?
I'm wondering: can we use Hyperledger Composer for enterprise blockchain applications, or just to create blockchain POCs?
Regards,
Bassel Kh
Hyperledger Composer is intended to be used for enterprise blockchain applications, although Composer Playground is not intended for Production use.
Composer connects with Fabric using Business Network Cards, and these contain credentials and a Connection Profile. The connection profile contains definitions of the 'services' on the Fabric, and so it is possible to define specific Peers within the card.
Regarding channels - it is again the Connection Profile that determines the Channel used, and at the moment only one channel is supported per Card. Multiple cards can be used but disconnect/switch/re-connect might not be practical or desirable in some cases.
There is a way to connect to a different Business Network on a different Channel covered in this tutorial, but again it might not be suitable for all cases.
There is an outstanding issue on GitHub for using Composer for multi-channel, so you can leave a comment or +1 on it - particularly if you have a good use case for multi-channel.
Many people are thinking of and using channels as a security feature, but Composer ACLs might solve that issue in some cases. Similarly the upcoming sidedb feature in Fabric might offer security instead of separate channels.
Yes, you can call external web services and get the results back into your smart contract code or client; see -> https://hyperledger.github.io/composer/latest/integrating/call-out
Yes, Hyperledger Composer is intended for enterprise blockchain applications. Your applications will use the Composer client to write application data to the ledger, and its production runtime is where the 'chaincode' smart contract/business network is deployed/installed on the peers (just as Go chaincode is deployed). One such provider using Composer is here -> https://ibm-blockchain.github.io/platform-deployment/
Finally also see here -> https://github.com/hyperledger/composer-knowledge-wiki/blob/latest/knowledge.md#production for more info.

loopback connector for ElasticSearch

There are at least two different packages available on npm: loopback-connector-elastic-search and loopback-connector-es. I have not been able to connect my very basic LoopBack API to my ES instance, and the sparse documentation for these two connectors is not helping.
Any guidance would be really appreciated on how I can create an API for my app using LoopBack and Elasticsearch.
loopback-connector-elastic-search was originally published by drakerian but hasn't been under development since Oct 1, 2014, as you can see from the commits: https://github.com/drakerian/loopback-connector-elastic-search
loopback-connector-es is a fork of that original effort and is currently under active development, so please use that.
https://github.com/strongloop-community/loopback-connector-elastic-search
And you'll notice that it is hosted on github under strongloop-community which means it has a future even if the current author (me) gets hit by a truck :)
If even after referring to the instructions here: https://github.com/strongloop-community/loopback-connector-elastic-search#loopback-connector-elastic-search ... you have questions then just jump into the chat room to get some answers: https://gitter.im/strongloop-community/loopback-connector-elastic-search
The LoopBack connector for Elasticsearch is being actively maintained by me at https://github.com/strongloop-community/loopback-connector-elastic-search
But the connector package is published on npm under a different name, 'loopback-connector-esv6'.
Here is the link: https://www.npmjs.com/package/loopback-connector-esv6.
This connector supports both Elasticsearch 6.x and 7.x and requires npm version 6.9.0 or later.
For now this connector supports only LoopBack 3.x; LoopBack 4.x support is planned for the near future.
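For orientation, a datasource entry for this connector tends to look roughly like the following; the field names are based on my recollection of the connector's README, so verify them against the package documentation, and the host, index, and version values are placeholders:

```json
{
  "elasticsearch": {
    "name": "elasticsearch",
    "connector": "esv6",
    "version": 7,
    "index": "my-index",
    "configuration": {
      "node": "http://localhost:9200",
      "requestTimeout": 30000
    }
  }
}
```

This would go in your app's datasources.json, with models then bound to the `elasticsearch` datasource as with any other LoopBack connector.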
