I am currently trying to increase the compute budget for my Solana program on devnet. I am using Solana version 1.9.9.
I have checked the Anchor Discord and found this implementation to request a larger compute budget. However, when I run this code snippet:
const data = Buffer.from(
  Uint8Array.of(0, ...new BN(256000).toArray("le", 4))
);
const additionalComputeIx: TransactionInstruction = new TransactionInstruction({
  keys: [],
  programId: new PublicKey("ComputeBudget111111111111111111111111111111"),
  data,
});
...
I get "invalid instruction data". Any idea why this could be?
I was getting Compute Budget Exceeded before adding this instruction to the transaction.
You're very close! The instruction definition for requesting more units from the compute budget program contains a little-endian u32 for the units, but also a little-endian u32 for the additional fee to add, which you must include even if it's 0. So instead, you should try:
const data = Buffer.from(
  Uint8Array.of(
    0,
    ...new BN(256000).toArray("le", 4),
    ...new BN(0).toArray("le", 4)
  )
);
More information about the instruction at https://github.com/solana-labs/solana/blob/a6742b5838ffe6f37afcb24ab32ad2287a1514cf/sdk/src/compute_budget.rs#L10
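In case it helps, here is a minimal sketch of how that fixed instruction can be prepended to the transaction that carries your program's instruction. The helper name sendWithLargerBudget and the connection/payer/programIx parameters are just placeholders for your own setup, not part of any official API:
import { BN } from "@project-serum/anchor";
import {
  Connection,
  Keypair,
  PublicKey,
  Transaction,
  TransactionInstruction,
  sendAndConfirmTransaction,
} from "@solana/web3.js";

// Prepend the compute-budget request so it applies to the whole transaction.
async function sendWithLargerBudget(
  connection: Connection,
  payer: Keypair,
  programIx: TransactionInstruction // your program's own instruction (placeholder)
): Promise<string> {
  // variant index 0, then units (u32 LE), then additional fee (u32 LE)
  const data = Buffer.from(
    Uint8Array.of(0, ...new BN(256000).toArray("le", 4), ...new BN(0).toArray("le", 4))
  );
  const additionalComputeIx = new TransactionInstruction({
    keys: [],
    programId: new PublicKey("ComputeBudget111111111111111111111111111111"),
    data,
  });
  const tx = new Transaction().add(additionalComputeIx).add(programIx);
  return sendAndConfirmTransaction(connection, tx, [payer]);
}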
I've written a couple of contracts for L1 (Ethereum) and L2 (StarkNet) and have them communicate here.
I can see that L1 sent the message I'm expecting; see this TX on Etherscan. The last message in there, however, never executed on my L2 contract. I'm trying to figure out whether the L2 sequencer has invoked the handler function of my contract and, if so, whether/how it failed.
Does anyone here know how to find the TX that handles the invoke on L2? Or any other ideas/tools that would help figure out why the l1_handler never executed/failed?
First thing: transactions coming from L1 are regular transactions, and therefore their hash can be computed the same way as for invoke transactions. For more information on this you can check the documentation here. That is helpful for understanding the theory, but not that much for actually computing the tx hash.
Here is the L1 event that sends a message to StarkNet, and this is where I get the information needed to compute the hash:
Address: 0xde29d060d45901fb19ed6c6e959eb22d8626708e
Name: LogMessageToL2 (index_topic_1 address fromAddress, index_topic_2 uint256 toAddress, index_topic_3 uint256 selector, uint256[] payload, uint256 nonce)
Topics:
0: 0x7d3450d4f5138e54dcb21a322312d50846ead7856426fb38778f8ef33aeccc01
1: 0x779b989d7358acd6ce64237f16bbef09f35f6ecc
2: 1524569076953457512425355396075576585145183562308719695739798372277154230742
3: 1285101517810983806491589552491143496277809242732141897358598292095611420389
Data:
payload:
1393428179030720295440092695193628168230707649901849797435563042612822742693
11819812303435348947619
0
nonce:
69106
Here is the script I use, applied to your transaction (this may change in the future):
from starkware.cairo.lang.vm.crypto import pedersen_hash
from starkware.cairo.common.hash_state import compute_hash_on_elements
from starkware.crypto.signature.fast_pedersen_hash import pedersen_hash
from typing import List


def calculate_transaction_hash_common(
    tx_hash_prefix,
    version,
    contract_address,
    entry_point_selector,
    calldata,
    max_fee,
    chain_id,
    additional_data,
    hash_function=pedersen_hash,
) -> int:
    calldata_hash = compute_hash_on_elements(data=calldata, hash_func=hash_function)
    data_to_hash = [
        tx_hash_prefix,
        version,
        contract_address,
        entry_point_selector,
        calldata_hash,
        max_fee,
        chain_id,
        *additional_data,
    ]
    return compute_hash_on_elements(
        data=data_to_hash,
        hash_func=hash_function,
    )


def tx_hash_from_message(
    from_address: str, to_address: int, selector: int, nonce: int, payload: List[int]
) -> str:
    int_hash = calculate_transaction_hash_common(
        tx_hash_prefix=510926345461491391292786,  # int.from_bytes(b"l1_handler", "big")
        version=0,
        contract_address=to_address,
        entry_point_selector=selector,
        calldata=[int(from_address, 16), *payload],
        max_fee=0,
        chain_id=1536727068981429685321,  # StarknetChainId.TESTNET.value
        additional_data=[nonce],
    )
    return hex(int_hash)


print(
    tx_hash_from_message(
        from_address="0x779b989d7358acd6ce64237f16bbef09f35f6ecc",
        to_address=1524569076953457512425355396075576585145183562308719695739798372277154230742,
        selector=1285101517810983806491589552491143496277809242732141897358598292095611420389,
        nonce=69106,
        payload=[
            1393428179030720295440092695193628168230707649901849797435563042612822742693,
            11819812303435348947619,
            0,
        ],
    )
)
This outputs 0x4433250847579c56b12822a16205e12410f6ad35d8cfc2d6ab011a250eae77f, which we can find here and which was properly executed.
I am developing a game in which users guess a number and get a reward if they succeed.
This is a summary of my program:
First, the user sends an amount of SOL and their guess.
Second, the program gets a random number and stores the user's SOL in a vault.
Third, the program makes the random number, and if the user is right, it gives them a reward.
Here, how can I check in the program whether the user sent the correct amount of SOL?
This is the test code for calling the program:
const result = await program.rpc.play(
  new anchor.BN(40),
  new anchor.BN(0),
  new anchor.BN(20000000),
  _nonce, {
    accounts: {
      vault: vaultPDA,
      user: provider.wallet.publicKey, // User wallet
      storage: storageAccount.publicKey,
      systemProgram: systemProgram
    },
    instructions: [
      SystemProgram.transfer({
        fromPubkey: provider.wallet.publicKey,
        toPubkey: vaultPDA,
        lamports: 20000000
      })
    ],
    signers: [storageAccount]
  }
)
The best solution would be to directly transfer the lamports inside of your program using a cross-program invocation, like this program: Cross-program invocation with unauthorized signer or writable account
Otherwise, from within your program, you can check the lamports on the AccountInfo passed, and make sure it's the proper number, similar to this example: https://solanacookbook.com/references/programs.html#transferring-lamports
The difference there is that you don't need to move the lamports.
I aim to use Google model adaptation to improve speech-to-text accuracy, but these APIs are not well documented anywhere.
https://cloud.google.com/speech-to-text/docs/reference/rest/v1p1beta1/projects.locations.customClasses
I tried to create a custom class with 200,000 values. Above that count, it gives an error about the size of the payload, not about the entry count limit.
Where can I find proper information/details about the API and its restrictions?
I am using the Ruby library to create custom classes.
Code to create the custom class:
cname = "TestClass"
items = 3_00_000.times.map{|e| Google::Cloud::Speech::V1p1beta1::CustomClass::ClassItem.new(value: Faker::Name.name) };
_class = Google::Cloud::Speech::V1p1beta1::CustomClass.new(name: cname, items: items);
request = Google::Cloud::Speech::V1p1beta1::CreateCustomClassRequest.new({custom_class: _class, parent: "projects/<projectID>/locations/global", custom_class_id: cname})
_klass = client.create_custom_class request
I am getting the following error; it looks like creating/updating the class with this many values exceeds the ~10,000,000-byte payload limit:
Google::Cloud::InvalidArgumentError: 3:Request payload size exceeds the limit: 10485760 bytes.. debug_error_string:{"created":"#1628230030.306827000","description":"Error received from peer ipv4:142.251.42.10:443","file":"src/core/lib/surface/call.cc","file_line":1067,"grpc_message":"Request payload size exceeds the limit: 10485760 bytes.","grpc_status":3}
Here's all the publicly available documentation about the API:
https://cloud.google.com/speech/docs/
https://cloud.google.com/speech-to-text/docs/release-notes
https://cloud.google.com/speech-to-text/pricing
https://cloud.google.com/speech-to-text/quotas
https://cloud.google.com/speech-to-text/sla
https://cloud.google.com/speech-to-text/docs/support#troubleshooting
https://cloud.google.com/speech-to-text/docs/best-practices
https://cloud.google.com/speech-to-text/docs/encoding
https://cloud.google.com/speech-to-text/docs/languages
https://cloud.google.com/speech-to-text/docs/apis
https://cloud.google.com/speech-to-text/docs/concepts
https://cloud.google.com/speech-to-text/docs/how-to
https://cloud.google.com/speech/docs/tutorials
[Disclaimer: I published this question 3 weeks ago on Biostars, with no answers yet. I would really like to get some ideas/discussion to find a solution, so I am also posting it here.
Biostars post link: https://www.biostars.org/p/447413/]
For one of my PhD projects, I would like to access all variants found in the ClinVar DB that are at the same genomic position as the variant in each row of the input GSVar file. The language constraint is Python.
Up to now I have used the entrezpy module: entrezpy.esearch.esearcher. Please see more about entrezpy at: https://entrezpy.readthedocs.io/en/master/
From the entrezpy docs, I have followed this guide to access UIDs using the genomic position of a variant: https://entrezpy.readthedocs.io/en/master/tutorials/esearch/esearch_uids.html. In code:
# first get UIDs for clinvar records of the same position
# credits: https://entrezpy.readthedocs.io/en/master/tutorials/esearch/esearch_uids.html
chr = variants["chr"].split("chr")[1]
start, end = str(variants["start"]), str(variants["end"])
es = entrezpy.esearch.esearcher.Esearcher('esearcher', self.entrez_email)
genomic_pos = chr + "[chr]" + " AND " + start + ":" + end  # + "[chrpos37]"
entrez_query = es.inquire(
    {'db': 'clinvar',
     'term': genomic_pos,
     'retmax': 100000,
     'retstart': 0,
     'rettype': 'uilist'})  # 'usehistory': False
entrez_uids = entrez_query.get_result().uids
Then I used Entrez from Biopython to get the available ClinVar records:
# process each VariationArchive of each UID
handle = Entrez.efetch(db='clinvar', id=current_entrez_uids, rettype='vcv')
clinvar_records = {}
tree = ET.parse(handle)
root = tree.getroot()
This approach is working. However, I have two main drawbacks:
1. entrezpy fills up my log file, recording every interaction with Entrez, which makes the log file too big to be read by the hospital collaborator, who is a variant curator.
2. The entrezpy call entrez_query.get_result().uids returns all UIDs retrieved so far across all requests (say, one request per variant in the GSVar file), which makes the retrieval space-inefficient. That is, the entrez_uids list grows quickly as I process all the variants from a GSVar file. The simple solution I have implemented is to check which UIDs are new in the current request and keep only those for Entrez.efetch(). However, I still need to keep all UIDs seen for previous variants in order to know which UIDs are new. I do this in code by:
# first snippet's first lines go here
entrez_uids = entrez_query.get_result().uids
current_entrez_uids = [uid for uid in entrez_uids if uid not in self.all_entrez_uids_gsvar_file]
self.all_entrez_uids_gsvar_file += current_entrez_uids
Does anyone have suggestions on how to address these two drawbacks?
I am trying to make asynchronous kernel calls to my GPGPU using CUDAfy .NET.
When I pass values to the kernel and copy them back to the host, I do not always get the value I expect.
I have a structure Foo with a byte Bar:
[Cudafy]
public struct Foo {
    public byte Bar;
}
And I have a kernel I want to call:
[Cudafy]
public static void simulation(GThread thread, Foo[] f)
{
    f[0].Bar = 3;
    thread.SyncThreads();
}
I have a single thread with streamID = 1 (I tried using multiple threads, and noticed the issue. Reducing to a single thread didn't seem to fix the issue though).
//allocate
streamID = 1;
count = 1;
gpu.CreateStream(streamID);
Foo[] sF = new Foo[count];
IntPtr hF = gpu.HostAllocate<Foo>(count);
Foo[] dF = gpu.Allocate<Foo>(sF);

while (true)
{
    //set value
    sF[0].Bar = 1;
    byte begin = sF[0].Bar;

    //host -> pinned
    GPGPU.CopyOnHost<Foo>(sF, 0, hF, 0, count);
    sF[0].Bar = 2;

    lock (gpu)
    {
        //pinned -> device
        gpu.CopyToDeviceAsync<Foo>(hF, 0, dF, 0, count, streamID);
        //run
        gpu.Launch().simulation(dF);
        //device -> pinned
        gpu.CopyFromDeviceAsync<Foo>(dF, 0, hF, 0, count, streamID);
    }

    //WAIT
    gpu.SynchronizeStream(streamID);

    //pinned -> host
    GPGPU.CopyOnHost<Foo>(hF, 0, sF, 0, count);
    byte end = sF[0].Bar;
}

//de-allocate
gpu.Free(dF);
gpu.HostFree(hF);
gpu.DestroyStream(streamID);
First I create a stream on the GPU.
I am creating a regular structure Foo array of size 1 (sF) and setting its Bar value to 1. Then I create pinned memory on the host (hF) for Foo as well. I also create memory on the device for Foo (dF).
I initialize the structure's Bar value to 1, then I copy it to the pinned memory (as a check, I set the structure's value to 2 after copying to pinned; you'll see why later). Then I use a lock to ensure I have full access to the GPU, and I queue a copy to dF, a run of the kernel, and a copy from dF. At this point I don't know when this will all actually run on the GPU... so I can call SynchronizeStream to wait on the host until the device is done.
When it's done, I can copy the pinned memory (hF) to the shared memory (sF). When I get the value, it's usually a 3 (which was set on the device) or a 1 (which means either the value wasn't set in the kernel, or the new value wasn't copied to the pinned memory). I do know that the pinned memory is copied to the structure because the structure never has the value of 2.
Over many runs, a small percentage of runs results in something other than begin=1 and end=3. It is always begin=1, end=1, and it happens about 5-10% of the time.
I have no idea why this happens. I know it generally highlights a race condition, but given the sync calls, I would expect the async calls to behave in a predictable fashion.
Why would I be encountering this kind of issue with this code?
Thank you so much!
-Phil
I just figured out the issue that was occurring. While the launch was being done asynchronously... I didn't include the stream for the launch.
Changing my launch to be:
gpu.Launch(gridsize,blocksize,streamID).simulation(dF);
resolved the problem. It seems that the launches were occurring on stream 0 while streams 1 and 2 were being synced, so sometimes the data gets set and sometimes it doesn't. A race condition.