Why is our Chainlink node not catching any OracleRequest events from our Arbitrum Operator? Requests are never fulfilled and v2 jobs are never executed

We're trying to get our Chainlink AnyAPI stack to work with an Arbitrum Chainlink node. The same stack works on Ethereum, Polygon, and Avalanche without issue.
Chainlink AnyAPI powers some of the use cases of DSLA Protocol, a middleware for adding consumer-protection capabilities to any monitorable marketplace (e.g. OpenSea) using peer-to-peer service level agreements (SLAs).
Request Lifecycle
Here are the steps involved in verifying that an SLA contract has been respected, using Chainlink:
A user calls the verification function on the Messenger smart contract that implements Chainlink.
The Messenger smart contract sends the request to the PreCoordinator smart contract.
The PreCoordinator forwards the request to the Oracles defined in a Service Agreement (a proxy of Oracles).
Upon receiving LINK, each Oracle sends the request to its Chainlink node by emitting an OracleRequest event with the ID of the job to be executed.
The Chainlink node captures this event and executes the corresponding job.
Once the job is executed, the Chainlink node calls the fulfillOracleRequest2 function to return the result from the external adapter to the PreCoordinator.
The PreCoordinator takes the mean of all Oracle results and ultimately registers the SLI in the Messenger.
The SLA is then verified by comparing the SLI (the real value) against the SLO (the goal).
Fulfillment Issue
It appears our node doesn't pick up the OracleRequest events of our PreCoordinator v0.6 / Operator v0.7 setup.
The request is never fulfilled, so we've been wondering whether our job ID syntax is correct in the PreCoordinator service agreements (among other things) and whether, perhaps, there's a particular configuration to apply to the node / v2 job specs.
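To narrow this down, it may help to confirm independently that the Operator actually emits OracleRequest on Arbitrum and that the indexed specId topic matches the job's external job ID (for v2 direct-request jobs, the node matches the log's specId against the job's externalJobID). A minimal ethers.js sketch, assuming ethers v5; the event signature is taken from the job spec below, and ARBITRUM_WS_URL is a placeholder:

import { ethers } from "ethers";

// Placeholder RPC endpoint; use the same WebSocket URL the node uses.
const provider = new ethers.providers.WebSocketProvider(process.env.ARBITRUM_WS_URL);

const operator = new ethers.Contract(
  "0x6Dc1147ca16C020579642D90042CeA252474fD67", // Operator address from the job spec below
  [
    "event OracleRequest(bytes32 indexed specId, address requester, bytes32 requestId, uint256 payment, address callbackAddr, bytes4 callbackFunctionId, uint256 cancelExpiration, uint256 dataVersion, bytes data)",
  ],
  provider
);

operator.on("OracleRequest", (specId, requester, requestId) => {
  // For a v2 job, specId should equal the job's externalJobID as a
  // 32-byte value (the UUID without dashes); if it differs, the node
  // never matches the log to the job and the request stays unfulfilled.
  console.log({ specId, requester, requestId });
});

If no events show up here at all, the problem is upstream (the PreCoordinator never reaching the Operator); if events show up but the specId differs from the job's external ID, the service-agreement job ID encoding is the likely culprit.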
AnyAPI Stack
PreCoordinator.sol: a proxy for using multiple Chainlink oracles via service agreements
v0.6 Chainlink contracts:
import '@chainlink/contracts/src/v0.6/ChainlinkClient.sol';
import '@chainlink/contracts/src/v0.6/LinkTokenReceiver.sol';
import '@chainlink/contracts/src/v0.6/Median.sol';
import '@chainlink/contracts/src/v0.6/vendor/Ownable.sol';
import '@chainlink/contracts/src/v0.6/vendor/SafeMathChainlink.sol';
v0.7 Operator.sol contract
Migrated from the v0.6 Oracle.sol smart contract in an attempt to solve this issue.
A v1.9.0 Arbitrum Chainlink node with a v2 job specification:
type = "directrequest"
schemaVersion = 1
name = "StakingParametricRequest"
forwardingAllowed = false
maxTaskDuration = "0s"
contractAddress = "0x6Dc1147ca16C020579642D90042CeA252474fD67"
minContractPaymentLinkJuels = "0"
observationSource = """
decode_log   [type=ethabidecodelog
              abi="OracleRequest(bytes32 indexed specId, address requester, bytes32 requestId, uint256 payment, address callbackAddr, bytes4 callbackFunctionId, uint256 cancelExpiration, uint256 dataVersion, bytes data)"
              data="$(jobRun.logData)"
              topics="$(jobRun.logTopics)"]
decode_cbor  [type=cborparse data="$(decode_log.data)"]
fetch        [type=bridge name="staking-parametric" requestData="{\\"id\\": $(jobSpec.externalJobID), \\"data\\": { \\"sla_monitoring_start\\": $(decode_cbor.sla_monitoring_start), \\"sla_monitoring_end\\": $(decode_cbor.sla_monitoring_end), \\"sla_address\\": $(decode_cbor.sla_address), \\"network_name\\": $(decode_cbor.network_name)}}"]
parse        [type=jsonparse path="data,result" data="$(fetch)"]
encode_large [type="ethabiencode"
              abi="(bytes32 requestId, bytes _data)"
              data="{\\"requestId\\": $(decode_log.requestId), \\"_data\\": $(parse)}"
             ]
encode_tx    [type=ethabiencode
              abi="fulfillOracleRequest2(bytes32 requestId, uint256 payment, address callbackAddress, bytes4 callbackFunctionId, uint256 expiration, bytes calldata data)"
              data="{\\"requestId\\": $(decode_log.requestId), \\"payment\\": $(decode_log.payment), \\"callbackAddress\\": $(decode_log.callbackAddr), \\"callbackFunctionId\\": $(decode_log.callbackFunctionId), \\"expiration\\": $(decode_log.cancelExpiration), \\"data\\": $(encode_large)}"
             ]
submit_tx    [type=ethtx to="0x6Dc1147ca16C020579642D90042CeA252474fD67" data="$(encode_tx)"]

decode_log -> decode_cbor -> fetch -> parse -> encode_large -> encode_tx -> submit_tx
"""
The following environment variables:
ETH_CHAIN_ID: "42161"
LINK_CONTRACT_ADDRESS: "0xf97f4df75117a78c1A5a0DBb814Af92458539FB4"
ETH_URL: "wss://arbitrum-mainnet.s.chainbase.online/v1/[redacted]"
ETH_SECONDARY_URLS: "https://morning-dimensional-morning.arbitrum-mainnet.quiknode.pro/[redacted]/"
Has anybody experienced a similar issue?
Thanks a ton for your help!
Wilhem

After a quick glance I can say that the encode_tx task has an incorrect abi value: in fulfillOracleRequest2, data must be bytes, not bytes32.
From: https://github.com/smartcontractkit/chainlink/blob/develop/contracts/src/v0.7/Operator.sol#L208

Related

How can I pass JSON data to a webhook Chainlink job?

I am trying to get the "Pipeline Input" to be passed to an external adapter via the $(jobRun.requestBody) pipeline variable, parsed by a jsonparse task, and then sent via a fetch task. I am not sure what format the input should be in when running a webhook job on a Chainlink node. I keep getting this and other errors no matter what I try:
data: key requestBody (segment 1 in keypath jobRun.requestBody): keypath not found
[Screenshot of the failed job run in the Chainlink Admin UI]
Here is the closest thing I have found in the documentation:
- https://docs.chain.link/chainlink-nodes/oracle-jobs/job-types/webhook
Here is the job definition if useful:
type = "webhook"
schemaVersion = 1
name = "account-balance-webhook"
forwardingAllowed = false
observationSource = """
parse_request [type="jsonparse" path="data,address" data="$(jobRun.requestBody)"]
fetch [type=bridge name="test" requestData="{\\"id\\": \\"0\\", \\"data\\": { \\"address\\": \\"$(parse_request)\\"}}"]
parse [type=jsonparse path="data,free" data="$(fetch)"]
parse_request -> fetch -> parse
"""
I am running Chainlink in a Docker container with this image: smartcontract/chainlink:1.11.0-root
Some background: I am working on developing an external adapter and want to be able to easily and quickly test.
We use the following webhook job to quickly verify there are no syntax errors, etc., with the bridge to an EA.
type = "webhook"
schemaVersion = 1
name = "[WH-username] cbor0-v0"
observationSource = """
fetch [type="bridge" name="bridge_name" requestData="{ \\"id\\": $(jobSpec.externalJobID), \\"input1\\": \\"value1\\" }"]
parse [type=jsonparse path="data,results" data="$(fetch)"]
fetch -> parse
"""
In general, it's best to quickly test an EA directly with curl GET/POST requests. If the curl works, then a bridge will work, as long as you named the bridge correctly in the job spec TOML.
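One thing that would produce exactly the "keypath not found" error above is starting the run without a request body (as far as I can tell, the Admin UI's "Run" button sends none). To populate $(jobRun.requestBody), trigger the run through the node's HTTP API with a JSON body. A hedged sketch (Node 18+ for global fetch); the endpoints mirror the ones the operator UI itself uses, and the credentials and job ID are placeholders, so verify them against your node version:

const NODE_URL = "http://localhost:6688";

async function runWebhookJob(jobId, body) {
  // Log in to obtain a session cookie (same credentials as the operator UI).
  const login = await fetch(`${NODE_URL}/sessions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email: "admin@example.com", password: "password" }),
  });
  const cookie = login.headers.get("set-cookie") || "";

  // The raw body of this request is what $(jobRun.requestBody) resolves to.
  const run = await fetch(`${NODE_URL}/v2/jobs/${jobId}/runs`, {
    method: "POST",
    headers: { "Content-Type": "application/json", cookie },
    body: JSON.stringify(body),
  });
  console.log(run.status, await run.text());
}

runWebhookJob("<external-job-id>", { data: { address: "0x..." } });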

How to close and transfer all SOL balance programmatically without reserving the rent?

I have a Solana account A which is a fee payer, and another Solana account B with some SOL balance. I want to close account B and transfer all of its SOL balance to another account C. How do I do that?
I saw the following error:
SendTransactionError: failed to send transaction: Transaction simulation failed: Transaction results in an account (1) without insufficient funds for rent
The code:
const solana = require('@solana/web3.js');

const lamports = await connection.getBalance(account_b_public_key);
const transaction = new solana.Transaction();
transaction.add(solana.SystemProgram.transfer({
  fromPubkey: account_b_public_key,
  toPubkey: account_c_public_key,
  lamports: lamports,
}));
transaction.feePayer = account_a_public_key;
const signature = await solana.sendAndConfirmTransaction(connection, transaction, [account_a_signer, account_b_signer]);
I know something is missing there, but I cannot find an interface to close account B and send its entire SOL balance to another account.
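For a plain system-owned account there is no separate "close" instruction: transferring the entire balance away leaves it at zero lamports, and the runtime reclaims it. The simulation error suggests some account is being left above zero but below the rent-exempt minimum, most likely a destination that ends up underfunded. A sketch of the drain under those assumptions (B is a system-owned, zero-data account), using @solana/web3.js:

const {
  SystemProgram,
  Transaction,
  sendAndConfirmTransaction,
} = require("@solana/web3.js");

async function drainAccount(connection, feePayer, from, toPubkey) {
  // A (feePayer) pays the fee, so B (from) can be drained to exactly zero;
  // a zero-lamport system account is reclaimed by the runtime.
  const lamports = await connection.getBalance(from.publicKey);

  // The destination must end up at or above the rent-exempt minimum for a
  // zero-data account, otherwise simulation fails with the rent error above.
  const rentExemptMin = await connection.getMinimumBalanceForRentExemption(0);
  const destBalance = await connection.getBalance(toPubkey);
  if (destBalance + lamports < rentExemptMin) {
    throw new Error("destination would be left below the rent-exempt minimum");
  }

  const tx = new Transaction().add(
    SystemProgram.transfer({ fromPubkey: from.publicKey, toPubkey, lamports })
  );
  tx.feePayer = feePayer.publicKey;
  return sendAndConfirmTransaction(connection, tx, [feePayer, from]);
}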

Checking result of an L1 -> L2 message/invoke in Starknet

I've written a couple of contracts for L1 (Ethereum) and L2 (Starknet) and have them communicate here.
I can see that L1 sent the message I'm expecting; see this TX on Etherscan. The last message in there, however, never executed on my L2 contract. I'm trying to figure out whether the L2 Sequencer has invoked the handler function of my contract and, if so, whether/how it failed.
Does anyone here know how to find the TX that handles the invoke on L2? Or any other ideas/tools that would help figure out why the l1_handler never executed or failed?
First, transactions coming from L1 are regular transactions, and therefore their hash can be computed the same way as invoke transaction hashes. For more information on this, you can check the documentation here. That is helpful for understanding the theory, but not that much for actually computing the tx hash.
Here is the L1 event that sends a message to StarkNet; this is where I get the information needed to compute the hash:
Address: 0xde29d060d45901fb19ed6c6e959eb22d8626708e
Name: LogMessageToL2 (index_topic_1 address fromAddress, index_topic_2 uint256 toAddress, index_topic_3 uint256 selector, uint256[] payload, uint256 nonce)
Topics:
  0: 0x7d3450d4f5138e54dcb21a322312d50846ead7856426fb38778f8ef33aeccc01
  1: 0x779b989d7358acd6ce64237f16bbef09f35f6ecc
  2: 1524569076953457512425355396075576585145183562308719695739798372277154230742
  3: 1285101517810983806491589552491143496277809242732141897358598292095611420389
Data:
  payload:
    1393428179030720295440092695193628168230707649901849797435563042612822742693
    11819812303435348947619
    0
  nonce: 69106
Here is the script I use applied to your transaction (this may change in the future)
from starkware.cairo.common.hash_state import compute_hash_on_elements
from starkware.crypto.signature.fast_pedersen_hash import pedersen_hash
from typing import List


def calculate_transaction_hash_common(
    tx_hash_prefix,
    version,
    contract_address,
    entry_point_selector,
    calldata,
    max_fee,
    chain_id,
    additional_data,
    hash_function=pedersen_hash,
) -> int:
    calldata_hash = compute_hash_on_elements(data=calldata, hash_func=hash_function)
    data_to_hash = [
        tx_hash_prefix,
        version,
        contract_address,
        entry_point_selector,
        calldata_hash,
        max_fee,
        chain_id,
        *additional_data,
    ]
    return compute_hash_on_elements(
        data=data_to_hash,
        hash_func=hash_function,
    )


def tx_hash_from_message(
    from_address: str, to_address: int, selector: int, nonce: int, payload: List[int]
) -> str:
    int_hash = calculate_transaction_hash_common(
        tx_hash_prefix=510926345461491391292786,  # int.from_bytes(b"l1_handler", "big")
        version=0,
        contract_address=to_address,
        entry_point_selector=selector,
        calldata=[int(from_address, 16), *payload],
        max_fee=0,
        chain_id=1536727068981429685321,  # StarknetChainId.TESTNET.value
        additional_data=[nonce],
    )
    return hex(int_hash)


print(
    tx_hash_from_message(
        from_address="0x779b989d7358acd6ce64237f16bbef09f35f6ecc",
        to_address=1524569076953457512425355396075576585145183562308719695739798372277154230742,
        selector=1285101517810983806491589552491143496277809242732141897358598292095611420389,
        nonce=69106,
        payload=[
            1393428179030720295440092695193628168230707649901849797435563042612822742693,
            11819812303435348947619,
            0,
        ],
    )
)
This outputs 0x4433250847579c56b12822a16205e12410f6ad35d8cfc2d6ab011a250eae77f, which we can find here and which was properly executed.

method: "hardhat_impersonateAccount" - What happens when you call this method with an address that doesn't exist?

async function impersonateAccount(acctAddress) {
  await hre.network.provider.request({
    method: "hardhat_impersonateAccount",
    params: [acctAddress],
  });
  return await ethers.getSigner(acctAddress);
}
When forking the blockchain locally on Hardhat, the function above allows developers to impersonate the address passed as argument to it.
So you can create transactions as if you're the owner of the account.
What happens when forking the mainnet, and you pass an address that does not exist on the mainnet as an argument?
Would it throw an error?
Does it create the account for you locally and give you access?
It will create the account locally with a balance of 0 ETH.
I tried this with the Ropsten address 0xFD391b604E9456c0Ec4aC13Cc881FbAF68868eB2, which currently has 210 testnet ETH and does not exist on the mainnet.
With your code example it will return a valid signer, and if you check the balance of the signer's address it will have 0 ETH.
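A quick way to verify this on a mainnet fork; a sketch assuming Hardhat with the hardhat-ethers plugin (ethers v5), reusing the unused-on-mainnet address above:

const hre = require("hardhat");

async function main() {
  const addr = "0xFD391b604E9456c0Ec4aC13Cc881FbAF68868eB2"; // no mainnet history
  await hre.network.provider.request({
    method: "hardhat_impersonateAccount",
    params: [addr],
  });
  const signer = await hre.ethers.getSigner(addr);
  const balance = await hre.ethers.provider.getBalance(signer.address);
  console.log(hre.ethers.utils.formatEther(balance)); // prints "0.0"
}

main().catch(console.error);

If you need the impersonated account to pay gas, you can fund it first with the hardhat_setBalance RPC method.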

nodejs + grpc-node server much slower than REST

I have implemented 2 services A and B, where A can talk to B via both gRPC (using grpc-node with Mali) and pure HTTP REST calls.
The request size is negligible.
The response size is 1000 items that look like this:
{
  "productId": "product-0",
  "description": "some-text",
  "price": {
    "currency": "GBP",
    "value": "12.99"
  },
  "createdAt": "2020-07-12T18:03:46.443Z"
}
Both A and B are deployed in GKE as services, and they communicate over the internal network using kube-proxy.
What I discovered is that the REST version is a lot faster than gRPC. The REST call's p99 sits at < 1s, and the gRPC's p99 can go over 30s.
Details
Node version and OS: node:14.7.0-alpine3.12
Dependencies:
"google-protobuf": "^3.12.4",
"grpc": "^1.24.3",
"mali": "^0.21.0",
I have even created client-side TCP pooling by setting the gRPC option grpc.use_local_subchannel_pool=1, but this did not seem to help.
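(For reference, channel options like that one are passed as the third argument of the client constructor in grpc-node; a sketch with a placeholder proto path and package/service names:)

const grpc = require("grpc");
const protoLoader = require("@grpc/proto-loader");

// Placeholder proto file and package/service names.
const packageDef = protoLoader.loadSync("product.proto");
const proto = grpc.loadPackageDefinition(packageDef);

const client = new proto.products.ProductService(
  "service-b:50051",
  grpc.credentials.createInsecure(),
  { "grpc.use_local_subchannel_pool": 1 } // one subchannel pool per client
);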
The problem seems to be on the server side, as I can see from the logs that the grpc lib's call.startBatch call took many seconds to send data of size ~51 KB. This is way slower than the REST version.
I also checked that the CPU and network of the services are healthy. The REST version could send > 2 Mbps, whereas the gRPC version only manages ~150 Kbps.
Running netstat on service B (in gRPC) shows a number of ESTABLISHED TCP connections (as expected because of TCP pooling).
My suspicion is that the grpc-core C++ code is somehow less optimal than REST, but I have no proof.
Any ideas on where I should look next? Thanks for any help!
Update 1
Here're some benchmarks:
Setup
Blazemeter --REST--> service A --gRPC/REST--> service B
request body (both legs) is negligible
service A is a node service + Koa
service B has 3 options:
grpc-node: node with grpc-node
gRPC + Go: Go implementation of the same gRPC service
REST + Koa: node with Koa
Blazemeter --> service A: response payload is negligible, and the same for all tests
service A --> service B: the gRPC/REST response payload is 1000 of ProductPrice:
message ProductPrice {
  string product_id = 1;                     // Hard coded to "product-x", x in [0 ... 999]
  string description = 2;                    // Hard coded to random string, length = 10
  Money price = 3;
  google.protobuf.Timestamp created_at = 4;  // Hard coded
}

message Money {
  Currency currency = 1;                     // Hard coded to GBP
  string value = 2;                          // Hard coded to "12.99"
}

enum Currency {
  CURRENCY_UNKNOWN = 0;
  GBP = 1;
}
The services are deployed to Kubernetes in GCP,
instance type: n1-highcpu-4
5 pods each service
2 CPU, 1 GB memory each pod
kube-proxy using cluster IP (not going via the internet) (I've also tested headless with clusterIP: None, which gave similar results)
Load
50rps
Results
[Chart: service B using grpc-node]
[Chart: service B using Go gRPC]
[Chart: service B using REST with Koa]
[Chart: network IO]
Observations
gRPC + Go is roughly on par with REST (I thought gRPC would be faster)
grpc-node is 4x slower than REST
Network isn't the bottleneck
