I'm trying to test a program on localnet which makes numerous cross-program invocations (CPIs).
Is there an easy way to initialize a localnet cluster with all the accounts copied over from mainnet-beta?
I know there is a --clone flag on the solana-test-validator command, but it would be impractical to pass it on the command line for every account I need copied over.
The approach I've taken is to use solana account to dump the accounts to local files, then use "in code" initialization of the test validator to load those accounts before testing.
For the first part, you could rig up a script that invokes:
solana account -o LOCALFILE.json --output json-compact PUBLIC_KEY
which fetches the account associated with PUBLIC_KEY and writes it to LOCALFILE.json.
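If it helps, here is a minimal sketch of such a script in Rust (hypothetical, assuming the solana CLI is on your PATH; the pubkey list is whatever set of accounts you need cloned):

use std::process::Command;

fn dump_accounts(pubkeys: &[&str]) {
    for pk in pubkeys {
        // Equivalent to: solana account -o <pk>.json --output json-compact <pk>
        let status = Command::new("solana")
            .arg("account")
            .arg("-o")
            .arg(format!("{pk}.json"))
            .arg("--output")
            .arg("json-compact")
            .arg(pk)
            .status()
            .expect("failed to run the solana CLI");
        assert!(status.success(), "failed to fetch account {pk}");
    }
}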
Then, in Rust (just an example using 2 accounts, but it could be many more; more than likely you'd want to walk a well-known directory and loop over it to build the input Vec):
use std::error;

use solana_sdk::signature::{Keypair, Signer};
use solana_streamer::socket::SocketAddrSpace;
use solana_test_validator::{AccountInfo, TestValidator, TestValidatorGenesis};

// get_keypair and the USER_*/PROG_*/WALLET_*/LEDGER_* items are helpers and
// constants defined elsewhere in the test crate.
fn load_stored(tvg: &mut TestValidatorGenesis) -> &mut TestValidatorGenesis {
    let mut avec = Vec::<AccountInfo>::new();
    for i in 0..2 {
        let akp = get_keypair(USER_ACCOUNT_LIST[i]).unwrap();
        avec.push(AccountInfo {
            address: akp.pubkey(),
            filename: USER_STORED_LIST[i],
        });
    }
    tvg.add_accounts_from_json_files(&avec)
}
/// Set up the test validator with predefined properties.
pub fn setup_validator() -> Result<(TestValidator, Keypair), Box<dyn error::Error>> {
    let vwallet = get_keypair(WALLET_ACCOUNT).unwrap();
    std::env::set_var("BPF_OUT_DIR", PROG_PATH);
    let mut test_validator = TestValidatorGenesis::default();
    test_validator.ledger_path(LEDGER_PATH);
    test_validator.add_program(PROG_NAME, PROG_KEY);
    load_stored(&mut test_validator);
    // solana_logger::setup_with_default("solana=error");
    let test_validator =
        test_validator.start_with_mint_address(vwallet.pubkey(), SocketAddrSpace::new(true))?;
    Ok((test_validator, vwallet))
}
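From there, a hypothetical usage sketch in a test (assuming the constants and helpers above are defined; get_rpc_client is the accessor TestValidator exposes for talking to the local cluster):

#[test]
fn runs_against_cloned_accounts() {
    let (test_validator, payer) = setup_validator().unwrap();
    let rpc_client = test_validator.get_rpc_client();
    // ... build and send transactions that exercise the CPIs under test ...
    let balance = rpc_client.get_balance(&payer.pubkey()).unwrap();
    assert!(balance > 0);
}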
You can also launch the validator with -um -c ADDRESS (that is, --url mainnet-beta --clone ADDRESS, repeating -c for each account) to preload accounts with their mainnet-beta content. In practice that's often not feasible, since you would simply need too many accounts, but for small programs it does work.
As another alternative, you can try this fork of the Solana monorepo, which aims to clone the entire ledger state from mainnet and spin up a validator from it: https://github.com/DappioWonderland/solana
Note that I haven't used it and haven't audited it to be sure it doesn't do anything shady, but if it lives up to the promise, it should be exactly what you need!
In @solana/web3.js, when you create a transaction instruction, you specify the pubkeys of your accounts, the program id, and just raw bytes for your "data" parameter. In an Anchor program you declare your module to be a program with the corresponding attribute, and your pub functions become instructions. I could not find in the Anchor book how, specifically, they serialize the instruction name. How do I specify for my JavaScript frontend which instruction I want it to execute?
Anchor uses an IDL (Interface Description Language) for this purpose. Whenever you build your Solana program (anchor build), the IDL is exported to the root/target/idl folder. You can deploy this file to the Solana network, fetch it, and construct a program client in any language, e.g. TypeScript, because of the mapping it contains. Open one of the IDL .json files for a better understanding: with this file you can call the instructions or use the accounts of your Solana program. As for how the instruction name itself is serialized: Anchor prepends an 8-byte discriminator to the instruction data, derived from the first 8 bytes of sha256("global:<snake_case_instruction_name>").
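If you're curious, here is a minimal sketch of that derivation (assuming the sha2 crate; this mirrors what Anchor does internally rather than calling an Anchor API):

use sha2::{Digest, Sha256};

/// First 8 bytes of sha256("global:<instruction_name>"), which Anchor
/// writes at the start of the instruction data.
fn instruction_discriminator(name: &str) -> [u8; 8] {
    let hash = Sha256::digest(format!("global:{name}").as_bytes());
    let mut disc = [0u8; 8];
    disc.copy_from_slice(&hash[..8]);
    disc
}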
Also, you have another file, with a .ts extension, inside the root/target/types folder. We use this file inside anchor tests to create a program object and to specify which instruction or account we want to use. This file is also useful for creating program clients, because it contains "export const IDL ...". So we can use it to create a program like this:
import * as anchor from "@project-serum/anchor"; // or "@coral-xyz/anchor" in newer versions
import { PROGRAM_ID } from "./constants"; // program account public key
import { IDL } from "./your_directory_of_programs"; // copy/paste of target/types/your_program.ts

export function getProgramInstance(connection, wallet) {
  if (!wallet.publicKey) return;
  const provider = new anchor.AnchorProvider(
    connection,
    wallet,
    anchor.AnchorProvider.defaultOptions()
  );
  // Read the generated IDL.
  const idl = IDL;
  // Address of the deployed program.
  const programId = PROGRAM_ID;
  // Generate the program client from IDL.
  const program = new anchor.Program(idl, programId, provider);
  return program;
}
and call any instruction like this:
await program.methods
  .yourInstruction()
  .accounts({
    // accounts required by the instruction
  })
  .signers([])
  .rpc();
Read this part of the Solana Cookbook for more details on what's going on when we call a program instruction from TS or any other client.
I have been trying to run the execute_sale instruction from the mpl-auction-house package, but I get this error in the logs. I have the sellInstruction and buyInstruction working.
This is my code:
const executeSellInstructionAccounts: ExecuteSaleInstructionAccounts = {
  buyer: buyerwallet.publicKey,
  seller: Sellerwallet.publicKey,
  tokenAccount: tokenAccountKey,
  tokenMint: mint,
  metadata: await getMetadata(mint),
  treasuryMint: new anchor.web3.PublicKey(AuctionHouse.mint),
  auctionHouse: new anchor.web3.PublicKey(AuctionHouse.address),
  auctionHouseFeeAccount: new anchor.web3.PublicKey(AuctionHouse.feeAccount),
  authority: new anchor.web3.PublicKey(AuctionHouse.authority),
  programAsSigner: programAsSigner,
  auctionHouseTreasury: new anchor.web3.PublicKey(AuctionHouse.treasuryAccount),
  buyerReceiptTokenAccount: buyerATA.address,
  sellerPaymentReceiptAccount: Sellerwallet.publicKey,
  buyerTradeState: BuyertradeState,
  escrowPaymentAccount: escrowPaymentAccount,
  freeTradeState: freeTradeState,
  sellerTradeState: SellertradeState,
};

const executeSellInstructionArgs: ExecuteSaleInstructionArgs = {
  escrowPaymentBump: escrowBump,
  freeTradeStateBump: freeTradeBump,
  programAsSignerBump: programAsSignerBump,
  buyerPrice: buyPriceAdjusted,
  tokenSize: tokenSizeAdjusted,
};

const execute_sale_ix = createExecuteSaleInstruction(
  executeSellInstructionAccounts,
  executeSellInstructionArgs
);

const execute_sale_tx = new anchor.web3.Transaction({
  recentBlockhash: blockhash,
  feePayer: Sellerwallet.publicKey,
});

execute_sale_tx.add(execute_sale_ix);
const execute_sale_res = await sprovider.sendAndConfirm(execute_sale_tx);
There is currently a discrepancy between the published AuctionHouse SDK and the underlying Rust program.
The console reference implementation is here: https://github.com/metaplex-foundation/metaplex/blob/master/js/packages/cli/src/auction-house-cli.ts
The console reference implementation works because it loads the IDL directly from the chain and is therefore up to date. It bypasses the AuctionHouse SDK completely.
However, if you're doing this in the browser, you probably don't want to load the IDL from the chain. You'd need things like a decompression library and that would blow up your package size quite a bit.
To work around this, I've forked the Metaplex repo here: https://github.com/neftworld/metaplex
The fork above has the following changes:
Including the IDL definition as a TypeScript src file (correct as of 30 May 2022)
Fetching the auctionHouse program from the local IDL definition instead of getting it from the chain
Hence, you can use this as a base for your web implementation. To make this work on the web, you will need to remove the references to a keypair (the console version uses a keypair file) and use the browser wallet to sign the transaction before sending.
I'm trying to retrieve, with Rust, the unique identifier of the MSV authentication package. For that, I'm trying to use the Windows API function LsaLookupAuthenticationPackage, but it is returning the NTSTATUS constant 0xC00000FE (STATUS_NO_SUCH_PACKAGE), which, according to the official documentation, means "A specified authentication package is unknown".
To call LsaLookupAuthenticationPackage, I'm using the official crate to access the Windows API. Below I detail the activities that I perform before calling LsaLookupAuthenticationPackage:
1. I run my code as SYSTEM.
2. I successfully enable SeTcbPrivilege using the function RtlAdjustPrivilege.
3. I successfully call LsaRegisterLogonProcess to obtain a handle for interacting with the LSA.
4. I use this handle to call LsaLookupAuthenticationPackage. Below is the code that I'm using to call it:
let package: [char; 20] = ['M','S','V','1','_','0','_','P','A','C','K','A','G','E','_','N','A','M','E','\0'];
let auth_package: STRING = STRING { Length: 19, MaximumLength: 19, Buffer: transmute(package.as_ptr()) };
let auth_package_ptr: *const STRING = transmute(&auth_package);
let auth_id: *mut u32 = transmute(&u32::default());
let ret = LsaLookupAuthenticationPackage(handle, auth_package_ptr, auth_id);
I also tried using an &str instead of the char array to store the auth package name, but that didn't work either.
It seems clear that the function itself is being reached, so the problem appears to be that I'm not passing the auth package name correctly.
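One plausible suspect, shown as an illustrative (untested) sketch reusing the question's own types: STRING expects an ANSI buffer of 8-bit characters, but Rust's char is 4 bytes wide, so transmuting the char array's pointer hands LSA UTF-32 data. Length should also be the byte length excluding the NUL, with MaximumLength including it:

// "MSV1_0_PACKAGE_NAME" as 8-bit ANSI bytes with a trailing NUL.
let package: &[u8] = b"MSV1_0_PACKAGE_NAME\0";
let auth_package: STRING = STRING {
    Length: 19,         // byte length, excluding the trailing NUL
    MaximumLength: 20,  // byte length, including the trailing NUL
    Buffer: transmute(package.as_ptr()),
};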
I've got the integration tests using the simulator working on my main contract, and I have a second contract which makes cross-contract calls to it.
Can I create a simulation test which loads both contracts and runs cross-contract calls?
I started to try this, and it seems I might be able to deploy two contracts like so:
use near_sdk_sim::{call, init_simulator, deploy, UserAccount, ContractAccount};
use contract1::Contract1Contract;
use contract2::Contract2Contract; // <-- would need to do something different since contract2 is in a separate package

pub fn init() -> (UserAccount,
                  ContractAccount<Contract1Contract>,
                  ContractAccount<Contract2Contract>) {
    let root = init_simulator(None);
    let contract1: ContractAccount<Contract1Contract> = deploy!(
        contract: Contract1Contract,
        contract_id: "contract1".to_string(),
        bytes: &CONTRACT1_WASM_BYTES,
        signer_account: root,
    );
    let contract2: ContractAccount<Contract2Contract> = deploy!(
        contract: Contract2Contract,
        contract_id: "contract2".to_string(),
        bytes: &CONTRACT2_WASM_BYTES,
        signer_account: root,
    );
    (root, contract1, contract2)
}
I realized that, since the two contracts are in different packages, I wasn't sure how to import the struct for contract2 into the integration tests for contract1.
Is there a way to do this, and/or an example to look at?
Does the simulator even support this? (i.e. deploying two contracts in the same test)
Yes. You can find more about cross-contract calls here: https://github.com/near-examples/cross-contract-calls
Please be aware that some of the dependencies in this repo may be a little dated at this point, but the concepts remain intact.
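On the import question specifically, one pattern (a sketch, assuming both contracts live in the same Cargo workspace) is to add contract2 as a dev-dependency of the crate that holds the tests, after which the second deploy! can name the struct that #[near_bindgen] generates for contract2:

// In the test crate's Cargo.toml (path is hypothetical):
//   [dev-dependencies]
//   contract2 = { path = "../contract2" }

use contract2::Contract2Contract;

let contract2: ContractAccount<Contract2Contract> = deploy!(
    contract: Contract2Contract,
    contract_id: "contract2".to_string(),
    bytes: &CONTRACT2_WASM_BYTES,
    signer_account: root,
);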
I have a secret key-value pair in Secrets Manager in Account-1 in us-east-1. This secret is encrypted using a Customer managed KMS key - let's call it KMS-Account-1. All this has been created via console.
Now we turn to CDK. We have cdk.pipelines.CodePipeline which deploys Lambda to multiple stages/environments - so 1st to { Account-2, us-east-1 } then to { Account-3, eu-west-1 } and so on. This has been done.
The lambda code in all the stages/environments above now needs to be changed to use the secret key-value pair present in Account-1's us-east-1 Secrets Manager, fetching it via the secretsmanager client. That code should probably look like this (Python):
import json
import boto3

client = boto3.session.Session().client(
    service_name = 'secretsmanager',
    region_name = 'us-east-1'
)
resp = client.get_secret_value(
    SecretId='arn:aws:secretsmanager:us-east-1:<ACCOUNT-1>:secret:name/of/the/secret'
)
secret = json.loads(resp['SecretString'])
All lambdas in various accounts and regions (ie. environments) will have the exact same code as above since the secret needs to be fetched from Account-1 in us-east-1.
Firstly I hope this is conceptually possible. Is that right?
Next, how do I change the CDK code to facilitate this? How will the deployment in CodePipeline get permission to import this customer-managed KMS key and Secrets Manager secret, and apply the correct permissions for cross-account access by the lambdas that the CDK pipeline creates?
Can someone please give some pointers?
This is a bit tricky, as CloudFormation, and hence CDK, doesn't allow cross-account/cross-stage references, because CloudFormation exports don't work across accounts, as far as my understanding goes. All these patterns of "centralised" resources fall into that category, i.e. a resource in one account (or one stage in CDK) referenced by other stages.
If the resource is created outside the context of CDK (like via the console), then you might as well hardcode the names/ARNs/etc. throughout the CDK code where they're used, and that should be sufficient.
For resources that can hold resource-based policies, it's simpler: you can just attach the cross-account access permissions to them directly, again offline via the console, since you are maintaining them manually anyway. Each time you add a stage (account) to your pipeline, you will need to go to the resource and add the cross-account permissions manually.
For resources that don't have resource-based policies, like SSM for example, things are a bit roundabout, as you will need to create a role that can be assumed cross-account and then used to access the resource. In that case you will have to separately maintain the IAM role too, manually updating its trust policy with the other accounts as you add stages to your CDK pipeline. Then, as usual, hardcode the role ARN in your CDK code, assume it in some custom-resource lambda, and use it.
It gets more interesting if the creation is also done in the CDK code itself (i.e. managed by CloudFormation, not done separately via the console/AWS CLI etc.). In this case, you often wouldn't "know" the exact ARNs, as the physical ID is generated by CloudFormation and is likely part of the ARN. Even influencing the physical ID yourself (like by hardcoding a bucket name) might not solve it in all cases; for example, KMS ARNs and Secrets Manager ARNs append unique IDs or hash-like suffixes to the end of the ARN.
Instead of trying to work all that out, it is best left untouched: let CloudFormation generate whatever random name/ARN it chooses. To reference these constructs/ARNs later, put them into SSM parameters in the source/central account. SSM doesn't have resource-based policies that I know of, so additionally create a role in CDK that trusts the accounts in your CDK code. Once done, there is no more maintenance: each time you add new environments/accounts to CDK (assuming it's a CDK pipeline here), the "loop" construct that you will create will automatically add the new account to the trust relationship.
Now all you need to do is distribute this role ARN and the SSM parameter names to the other stages. Choose an explicit role name and explicit SSM parameter names; manually constructing a role ARN from a role name is straightforward. So distribute those around your CDK code to the other stages (as compile-time strings instead of references). In the target stages, create custom resources (AwsCustomResource) backed by an AwsSdkCall lambda that simply assumes this role ARN and makes the SDK call to retrieve the SSM parameter values. These values can be anything, like your KMS ARNs and Secrets Manager full ARNs, which you couldn't easily guess. Now simply use them.
A roundabout way to do a simple thing, but so far that is all I could do to get this to work.
# You need to maintain this list no matter what you do - so it's nothing extra
all_other_accounts = [ <list of accounts that this cdk deploys to> ]
account_principals = [iam.AccountPrincipal(a) for a in all_other_accounts]

role = iam.Role(
    assumed_by = iam.CompositePrincipal(*account_principals), # auto-updated as you change the list above
    role_name = some_explicit_name,
    ...
)
role_arn = f'arn:aws:iam::<account-of-this-stack>:role/{some_explicit_name}'

kms0 = kms.Key(...)
kms0.grant_decrypt(role)
# Because KMS also needs an explicit resource policy even if the role policy allows access to it
kms0.add_to_resource_policy(iam.PolicyStatement(principals = [iam.ArnPrincipal(role_arn)], actions = ...))

kms1 = kms.Key(...)
kms1.grant_decrypt(role)
kms1.add_to_resource_policy(... same as above ...)

secrets0 = secretsmanager.Secret(...) # maybe this is based off kms0
secrets0.grant_read(role)
secrets1 = secretsmanager.Secret(...) # maybe this is based off kms1
secrets1.grant_read(role)

# You can turn all this into a loop, of course.
ssm0 = ssm.StringParameter(self, '...', parameter_name = 'kms0_arn', string_value = kms0.key_arn, ...)
ssm0.grant_read(role)
ssm1 = ssm.StringParameter(self, '...', parameter_name = 'kms1_arn', string_value = kms1.key_arn, ...)
ssm1.grant_read(role)
ssm2 = ssm.StringParameter(self, '...', parameter_name = 'secrets0_arn', string_value = secrets0.secret_full_arn, ...)
ssm2.grant_read(role)
...

# Now simply pass around the role and the SSM parameter names
for env in environments:
    MyApplicationStage(self, <...>, ..., role_arn = role_arn, params = [ 'kms0_arn', 'kms1_arn', ... ], ...)
And then in the target stage(s):
for param in params:
    fn = AwsSdkCall(service = 'ssm', action = 'getParameter', parameters = { "Name": param }, ...)
    acr = AwsCustomResource(..., on_create = fn, on_update = fn, ...)
    collect[param] = acr.get_response_field('Parameter.Value')
Now do whatever you want with the collected artifacts, including supplying them as environment variables to your main service lambda (which will be resolved at deploy time).
Remember they will all be Tokens, resolved only at deploy time; but that's true of any resource, whether or not it comes via a custom resource, so it shouldn't matter.
That's a generic pattern which should work for any case.
(GitHub link where this question was asked and I had answered it there too)