I'm trying to create a token/NFT for testing my transfer method in a unit test.
Here is my code:
const connection = new Connection("https://api.devnet.solana.com");
const myKeypair = web3.Keypair.generate();
const fromAirdropSignature = await connection.requestAirdrop(myKeypair.publicKey, 2 * anchor.web3.LAMPORTS_PER_SOL);
await connection.confirmTransaction(fromAirdropSignature);
let minter = await splToken.createMint(connection, myKeypair, myKeypair.publicKey, null, 1, web3.Keypair.generate(), null, splToken.TOKEN_PROGRAM_ID)
I tried it this way, but when I run anchor test, requestAirdrop sometimes fails. Is this the right way to do it? How can I fix this?
When using devnet, you will always be limited in how much you can airdrop. You can get around that by doing one airdrop and reusing the same myKeypair for all of your test runs.
The best environment for unit tests, however, is a local solana-test-validator, which has much higher airdrop limits and can be destroyed and recreated at any time.
Before running your tests, in a separate shell, you can run:
$ solana-test-validator -r
And then connect to it using:
const connection = new Connection("http://localhost:8899");
I am building a pet project, a multiplayer quiz, using Next.js deployed on Vercel.
Everything works perfectly on localhost, but when I deploy it to Vercel as a cloud function (in an API route) I run into the problem that a serverless function can only run for 10 seconds.
So I want to understand the best practice for handling this.
A version of the game loop in an API route looks like this:
export async function quizGameProcess(
  roomInitData: InitGameData,
  questions: QuestionInDB[],
) {
  let questionNumber = 0;
  let maxPlayerPoints = 0;
  const pointsToWin = 10;
  while (maxPlayerPoints < pointsToWin) {
    const currentQuestion = questions[questionNumber];
    // Wait 5 seconds before start
    await new Promise(resolve => setTimeout(resolve, 5000));
    // Show question to players for 15 seconds
    await questionShowInRoom(roomInitData, currentQuestion);
    await new Promise(resolve => setTimeout(resolve, 15000)); // <====== Everything works great until this moment
    // Show the right answer for 5 seconds
    await AnswerPushInRoom(roomInitData);
    await new Promise(resolve => setTimeout(resolve, 5000));
    maxPlayerPoints = await countPlayerPoints(roomInitData);
    // ...
    questionNumber++;
  }
}
So I need 15 seconds to show players the question, and the cloud function returns an error while invoking it.
The questionShowInRoom() function just changes a string in the database from:
room = {activeWindow: prepareToStart}
to
room = {activeWindow: question}
after 15 seconds it must change it to:
room = {activeWindow: showAnswer}
So the function must return something within 10 seconds, but as soon as you return something, the route stops executing.
I can't use a VPS because the project must stay as one Next.js project folder, must be easy to maintain in one place, and must be free.
So if I split the code up and make some kind of 'worker', how should it be invoked? By some other route? Isn't that bad practice?
Or should the frontend just poll every second, trying to invoke the route until the timestamp difference exceeds 15 seconds? That looks like a strange solution.
So what is the best practice to handle this problem?
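To make that polling idea concrete, here is a rough sketch of a stateless route that recomputes the room's phase from a stored timestamp on every call, so no single invocation runs longer than a database round-trip (getRoom, updateRoom, and the phaseStartedAt field are hypothetical, not from the original code):

// Hypothetical handler: the frontend polls this route every second.
// Each call finishes in milliseconds, so the 10-second limit is never hit.
export async function advanceQuizState(roomId: string) {
  const room = await getRoom(roomId); // hypothetical DB helper
  const elapsedMs = Date.now() - room.phaseStartedAt;

  if (room.activeWindow === 'prepareToStart' && elapsedMs >= 5000) {
    await updateRoom(roomId, { activeWindow: 'question', phaseStartedAt: Date.now() });
  } else if (room.activeWindow === 'question' && elapsedMs >= 15000) {
    await updateRoom(roomId, { activeWindow: 'showAnswer', phaseStartedAt: Date.now() });
  } else if (room.activeWindow === 'showAnswer' && elapsedMs >= 5000) {
    await updateRoom(roomId, { activeWindow: 'prepareToStart', phaseStartedAt: Date.now() });
  }
  return room.activeWindow;
}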
In creating a simple program, I can't get Solana to use the devnet for its RPC connection. I keep getting the following error:
{
  blockhash: '7TTVjRKApwAqP1SA7vZ2tQHuh6QbnToSmVUA9kc7amEY',
  lastValidBlockHeight: 129662699
}
Error: failed to get recent blockhash: FetchError: request to http://localhost:8899/ failed, reason: connect ECONNREFUSED 127.0.0.1:8899
    at Connection.getRecentBlockhash (/home/simeon/dev/freelance/niels_vacancies/node_modules/@solana/web3.js/lib/index.cjs.js:6584:13)
even though I have set all of the relevant settings, like ANCHOR_PROVIDER_URL=https://api.devnet.solana.com, and the corresponding entries in my Anchor.toml file. I also explicitly specify the following:
const connection = new anchor.web3.Connection("https://api.devnet.solana.com/", { commitment: "max" });
const wallet = anchor.Wallet.local();
const provider = new anchor.Provider(
  connection,
  wallet,
  {
    commitment: "max",
    preflightCommitment: "max",
    skipPreflight: false
  }
);
I even test console.log(await anchor.getProvider().connection.getLatestBlockhash()); to ensure that I can, in fact, get a blockhash from the devnet. What can I do to force the RPC calls to do so too?
You just have to set the Anchor.toml cluster to devnet, add a programs.devnet entry, and then deploy the program using a wallet that holds devnet SOL. Here is an Anchor.toml for devnet:
[features]
seeds = false
[programs.devnet]
first_program = "FPT...bd3"
[registry]
url = "https://anchor.projectserum.com"
[provider]
cluster = "devnet"
wallet = "PATH/TO/WALLET/WHO/WILL/PAY/FOR/DEPLOY.json"
[scripts]
test = "yarn run ts-mocha -p ./tsconfig.json -t 1000000 tests/**/*.ts"
In this case, first_program is the program ID declared in the declare_id! macro.
Then you can use your test file as normal with anchor.setProvider(anchor.Provider.env());
If you have already updated the Anchor.toml to use devnet and are still having this issue with program.provider.connection.whatever or program.account.whatever.fetch.whatever, make sure that you set the Anchor provider BEFORE creating the program, e.g.:
const provider = AnchorProvider.env();
anchor.setProvider(provider);
must come before the line
const program: Program<Whatever> = workspace.Whatever;
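Putting the ordering together, the top of a devnet test file might look like this sketch (Whatever stands in for your program's generated type, and the import path for it is an assumption):

import * as anchor from "@project-serum/anchor";
import { AnchorProvider, Program, workspace } from "@project-serum/anchor";
import { Whatever } from "../target/types/whatever"; // assumed path

describe("whatever", () => {
  // Set the provider BEFORE touching the workspace...
  const provider = AnchorProvider.env();
  anchor.setProvider(provider);

  // ...so the program object picks up the devnet provider
  const program: Program<Whatever> = workspace.Whatever;

  it("reaches devnet, not localhost", async () => {
    console.log(await provider.connection.getLatestBlockhash());
  });
});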
I came across an interesting problem today that I have been having trouble figuring out, and I wanted to test the waters to see if anyone knows whether this is possible. I'm currently in the process of setting up nvim-dap, and for my situation I need to be able to run debuggers for multiple processes. Suppose I had a configuration that looked like this:
dap.adapters.node2 = {
type = 'executable',
command = 'node',
args = {os.getenv('HOME') .. '/dev/microsoft/vscode-node-debug2/out/src/nodeDebug.js'},
}
dap.configurations.javascript = {
{
type = 'node2',
request = 'attach',
name = 'program 1',
port = 9229,
},
{
type = 'node2',
request = 'attach',
name = 'program 2',
port = 7000,
},
{
type = 'node2',
request = 'attach',
name = 'program 3',
port = 5035,
},
}
Then when I use :lua require'dap'.continue() I get the option to attach to only one of those running processes. Does anyone know if it's possible to get a debugger going that can attach to all of these processes in the same nvim session? Bonus points if it can attach to a Chrome process as well for frontend debugging!
It looks like with launch configurations in VS Code you could use something like a compound launch configuration; however, I couldn't find any resource for getting similar functionality with attach requests in nvim-dap.
I'd love to see how you all are solving a similar problem!
The following code is a simple test of how many entities can be added per second or minute.
createAsset calls the backend (http://localhost:3000) and adds data using POST.
When I ran this test, it took 23 seconds to add 10 entities.
I am using Composer 0.19.12 and Fabric 1.1. In some threads on GitHub I read that performance has been improved by indexing CouchDB. How can I use that feature? (I need to check again, but it seems to be a default feature of recent Composer versions.)
addEntities: async function() {
  var start = 0;
  var end = start + 100;
  var sd = new Date();
  console.log(sd.getHours() + ':' + sd.getMinutes() + ':' + sd.getSeconds() + '.' + sd.getMilliseconds());
  // Add entities strictly one at a time, awaiting each request
  for (var i = start; i < end; i++) {
    entityData.id = i.toString();
    await this.createAsset('/Entity', 'model.Entity', entityData);
  }
  var ed = new Date();
  var totalTime = new Date(ed.getTime() - sd.getTime());
  console.log(totalTime.getMinutes() + ':' + totalTime.getSeconds() + '.' + totalTime.getMilliseconds());
},
My model is really simple, as follows:
asset Entity identified by id {
  o String id
}
Following david_k's advice, I have changed the test code to send multiple transactions at once:
addEntities: async function() {
  var start = 15000;
  var dataNumber = 1200;
  var loopNumber = 400;
  var end = start + dataNumber;
  var sd = new Date();
  console.log(sd.getHours() + ':' + sd.getMinutes() + ':' + sd.getSeconds() + '.' + sd.getMilliseconds());
  var tasks = [];
  for (var i = start; i < end; i++) {
    entityData.id = i.toString();
    if ((i - start) % loopNumber === loopNumber - 1) {
      // Await every loopNumber-th request so the client pauses periodically
      await this.createAsset('/Entity', 'model.Entity', entityData);
      console.log('--- i: ' + i + ' loops completed');
    }
    else {
      // Fire-and-forget: these requests are sent without being awaited
      this.createAsset('/Entity', 'model.Entity', entityData);
    }
  }
  var ed = new Date();
  var totalTime = new Date(ed.getTime() - sd.getTime());
  console.log(totalTime.getMinutes() + ':' + totalTime.getSeconds() + '.' + totalTime.getMilliseconds());
},
The purpose of the change is to send multiple requests at the same time, and it seems to work: performance is much better than with the previous code. However, it is still only around 8 TPS. Since the original test code managed one transaction per 2-3 seconds, that is a big improvement, but 8 TPS is nowhere near usable for a commercial application, and it is not even good for testing. Could someone give me some advice on this?
That sounds about right looking at your example code. I am assuming you are using either the fabric-dev-servers package, which is a very simple Fabric network to help users get started developing a business network and try it out on a Hyperledger Fabric network, or the byfn network from the multi-org tutorial, which is a Hyperledger Fabric example of a two-organisation consortium network that demonstrates the operational steps required to run Composer in a multi-org Fabric setup.
Hyperledger Fabric is a distributed ledger technology based around eventual consistency. Composer implements a submit/notify model: once a transaction has been submitted, the client is notified when that transaction has been committed to the ledger. You can configure which peers in the network you want to inform you when that occurs, but the default is all of them, so the REST server responds only once every peer has committed the transaction to the ledger.
Hyperledger Fabric doesn't commit individual transactions; it batches them into blocks, and these blocks are committed to the ledger. The orderer waits a period of time before building a block from the current set of submitted transactions, so blocks can contain one or more transactions. You need to configure Fabric for your use case to determine how transactions are batched into blocks.
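On the client side, one refinement of the modulo-await pattern above is to send fixed-size batches and await each whole batch with Promise.all, so every request is actually awaited and failures surface instead of being silently dropped. A minimal sketch reusing the question's createAsset and entityData (the batch size is an assumption to tune against your block-batching settings):

addEntities: async function() {
  var start = 0;
  var total = 1200;
  var batchSize = 50; // assumption: tune alongside the orderer's block settings
  var sd = Date.now();
  for (var offset = 0; offset < total; offset += batchSize) {
    var batch = [];
    for (var i = offset; i < Math.min(offset + batchSize, total); i++) {
      // Copy the payload per request so concurrent calls don't share one mutated object
      batch.push(this.createAsset('/Entity', 'model.Entity', Object.assign({}, entityData, { id: (start + i).toString() })));
    }
    await Promise.all(batch); // wait for the whole batch before sending the next one
  }
  console.log('elapsed ms: ' + (Date.now() - sd));
},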
I'm wondering if there is a way to get mocha to list all the tests that it will execute. I don't see any reasonable options when I run mocha --help; there are several reporters, but none appear designed to list the files that will be processed (or to name the tests that will be run).
Reporters work by listening for events dispatched by mocha, and these events are only dispatched during a real test run.
A suite contains a list of tests, so the information you need is there. However, the suite is usually only initialized on run().
If you are OK with running mocha from Node.js instead of from the command line, you can create this DIY solution based on the code from Using-mocha-programmatically:
var Mocha = require('mocha'),
    fs = require('fs'),
    path = require('path');

// First, you need to instantiate a Mocha instance.
var mocha = new Mocha(),
    testdir = 'test';

// Then, you need to use the method "addFile" on the mocha
// object for each file.
// Here is an example:
fs.readdirSync(testdir).filter(function (file) {
  // Only keep the .js files
  return file.substr(-3) === '.js';
}).forEach(function (file) {
  // Use the method "addFile" to add the file to mocha
  mocha.addFile(path.join(testdir, file));
});

// Here is the code to list tests without running them:
// ask mocha to load the files (and scan them for tests)
mocha.loadFiles(function () {
  // upon completion, list the tests found
  var count = 0;
  mocha.suite.eachTest(function (t) {
    count += 1;
    console.log('found test (' + count + ') ' + t.title);
  });
  console.log('done');
});
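If you save this as, say, list-tests.js (the filename is arbitrary) next to your test directory, running it should print every test title without executing any of them:
$ node list-tests.js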