Is there a possibility to determine/restrict smart-contract addresses that can use the Chainlink Oracle Node I created? - chainlink

Is there a possibility to determine/restrict smart-contract addresses that can use the Chainlink Oracle Node I created? How can I ensure that only Smart Contract addresses that I set use my Oracle node?

For convenience, you can whitelist the requesting smart contract addresses by listing them in an array in the job spec TOML, which you can edit through the Chainlink node GUI (localhost:6688). Below is an example from the docs [here].
requesters = [
"0xaaaa1F8ee20f5565510B84f9353F1E333E753B7a",
"0xbbbb70F0e81C6F3430dfdC9fa02fB22BdD818C4e"
]
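
For context, here is a hedged sketch of where that array sits in a full direct-request job spec; the job name, contract address, and pipeline below are placeholders, and only the requesters entries are taken from the docs snippet above.

type            = "directrequest"
schemaVersion   = 1
name            = "example uint256 fetch"             # placeholder job name
contractAddress = "0xYourOperatorContractAddress"     # placeholder Operator/Oracle contract
# Only requests originating from these consumer contracts will be serviced:
requesters = [
  "0xaaaa1F8ee20f5565510B84f9353F1E333E753B7a",
  "0xbbbb70F0e81C6F3430dfdC9fa02fB22BdD818C4e"
]
observationSource = """
  ... task pipeline (decode / fetch / parse / submit) goes here ...
"""

Requests that originate from any address not in that list should simply not be serviced by the node.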

Related

What does VRFCoordinatorV2Interface(vrfCoordinator) mean in the Chainlink documentation

I know that VRFCoordinatorV2Interface is an interface and that we put the respective chain's coordinator address in it. What does this signify, and how should I visualise it?
OR
What is the outcome when we put an address in an interface?
The Chainlink VRF Coordinator is a contract deployed on the blockchain that verifies each random value returned by the VRF node.
By putting "its address in an interface" you can interact with it programmatically from your smart contract. In other words, a function in your smart contract can call functions on the VRF Coordinator contract, for example createSubscription().

Grafana/Prometheus: visualizing multiple IPs as a query

I want to have a graph where every recent IP that requested my web server is shown with its total request count. Is something like this doable? Can I add a query and remove it afterwards via Prometheus?
Technically, yes. You will need to:
1. Expose some metric (probably a counter) in your server, say requests_count, with a label, say ip (see the sketch after this list).
2. Whenever you receive a request, increment the metric with the label set to the requester's IP.
3. In Grafana, graph the metric, likely summing it by the IP address to handle the case where you have several horizontally scaled servers handling requests: sum(your_prometheus_namespace_requests_count) by (ip)
4. Set the Legend of the graph in Grafana to {{ ip }} to 'name' each line after the IP address it represents.
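
As a minimal sketch of steps 1 and 2, assuming a Python service instrumented with prometheus_client (the metric name, port, and handler are just examples):

from prometheus_client import Counter, start_http_server

# Counter with an "ip" label; one time series is created per distinct IP.
REQUESTS = Counter(
    "myapp_requests_total",                  # example metric name
    "HTTP requests received, by client IP",
    ["ip"],
)

def handle_request(client_ip: str) -> None:
    # Step 2: increment the counter with the label set to the requester's IP.
    REQUESTS.labels(ip=client_ip).inc()

if __name__ == "__main__":
    start_http_server(8000)   # exposes /metrics for Prometheus to scrape
    handle_request("192.168.0.1")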
However, every different label value a metric has causes a whole new time series to exist in the Prometheus database; you can think of a metric like requests_count{ip="192.168.0.1"}=1 as being roughly similar to requests_count_ip_192_168_0_1{}=1 in terms of how it consumes memory. Each series currently held in the Prometheus TSDB head takes something on the order of 3 kB to exist. That means that if you're handling requests from millions of distinct IPs, you're going to swamp Prometheus' memory with gigabytes of data from this one metric alone. A more detailed explanation of this issue is in this other answer: https://stackoverflow.com/a/69167162/511258
With that in mind, this approach makes sense if you know for a fact that only a small set of IP addresses will connect (maybe on an internal intranet, or a client you distribute to a small number of known users), but if you plan to deploy to the web it gives people a very easy way to (most likely unknowingly) crash your monitoring system.
You may want to investigate an alternative -- for example, Grafana is capable of ingesting data from some common log aggregation platforms, so perhaps you can do some structured (e.g. JSON) logging, hold that in e.g. Elasticsearch, and then create a graph from the data held within that.

How to know that a Chainlink node doesn't manipulate my data

I'm trying to find some information about the security of using Chainlink as a tool for external API calls. The following is my scenario:
1. My smart contract initiates an API call using Chainlink with some parameters
2. The node picks up the request and does the external call
3. The node receives the response from the API
4. The node writes the result back to my smart contract
5. My smart contract finishes the execution
How can I validate that the Chainlink node does not change any of the parameters in steps 2-4? Is there some mathematical validation, like we have with the blockchain, that I can check?
The question comes up because of a regulatory requirement that I have. It would be nice if I could simply link to some proof that data cannot be manipulated by a Chainlink node, or, if manipulation is possible, know what I could do to validate that the data was not manipulated and/or lower the chance of it.

How to create a variable in IIB which has scope for each single flow?

I need to create a variable in an IIB flow which has to be available throughout the flow. I have gone through variable creation in the documentation. As per my understanding, I should create a SHARED variable in an ESQL module, but the documentation says that "Subsequent messages can access the data left by a previous message", which I didn't understand.
Could anyone please suggest how to create a variable which is scoped only to that flow (one per request/instance)?
For example, I may have to capture the total value of some elements in the payload and store the calculated value in that variable, which I can then use across all the nodes throughout the flow.
The Environment tree structure can be used for your use case:
The environment tree differs from the local environment tree in that a single instance of it is maintained throughout the message flow.
When the message flow processing is complete, the Environment tree is discarded.
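
For the "total of some payload elements" example, a minimal ESQL sketch (the message structure and field names are made up):

-- Compute node early in the flow: total up some payload elements
-- and store the result in the Environment tree.
DECLARE i INTEGER 1;
DECLARE total DECIMAL 0;
WHILE i <= CARDINALITY(InputRoot.XMLNSC.Order.Item[]) DO
    SET total = total + InputRoot.XMLNSC.Order.Item[i].Amount;
    SET i = i + 1;
END WHILE;
SET Environment.Variables.OrderTotal = total;

-- Any later Compute node in the same flow instance can read it back:
SET OutputRoot.XMLNSC.Summary.Total = Environment.Variables.OrderTotal;

Because a single Environment tree instance exists per flow invocation and is discarded when processing completes, the value is visible to every node in that flow but never leaks into the next message.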

How to communicate with an external system

I'm trying to write logic (a JS script) to communicate with an external system. As far as I understand, the logic will be executed on every endorsing peer.
In this case, how can I avoid duplicating the operation on the external system? For example, how do I increment a value in an external database? If I write logic to increment the value in JS, I think the value will be incremented by every endorsing peer.
I'll appreciate any comments.
Firstly, the only way you can currently interact with external systems is via the experimental post API. This allows your transaction processor function to HTTP POST data to an external system and then process the response.
Documentation here:
https://hyperledger.github.io/composer/integrating/call-out.html
You are correct in stating that if you have 4 peers, then the chain code container for each peer will run your logic, so you'd expect to see 4 calls to your HTTP service. This is required because each peer node is independent and Fabric must achieve consensus across the peers.
The external functions should therefore (ideally) be side-effect free "pure" functions (idempotent), meaning that for a given set of input parameters you always get the same set of output results.
Clearly a function that returns an incrementing integer doesn't fit this description! You probably need to rethink how you are structuring your problem to make it compatible with a decentralised blockchain-based approach.
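
For reference, a rough sketch of what a transaction processor using that post API might look like; the model names and endpoint are made up, and the exact signature and response shape of post are the ones described in the call-out documentation linked above. Note that the call-out is framed as an idempotent lookup rather than an increment:

/**
 * @param {org.example.ScoreRequest} tx - the submitted transaction (hypothetical model)
 * @transaction
 */
async function scoreAsset(tx) {
    // Every endorsing peer runs this, so the endpoint must be idempotent:
    // the same input always yields the same response.
    const result = await post('https://scoring.example.com/score', tx);  // hypothetical endpoint
    // Response shape per the linked docs; assumed here to carry a deterministic score.
    tx.asset.score = result.body.score;
    const registry = await getAssetRegistry('org.example.ScoredAsset');
    await registry.update(tx.asset);
}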
