How to set the price for using OCR in Chainlink?

Where is the price per request to an Offchain Aggregator price feed set?
Is it:
in the job's .toml file, set by the Chainlink node operator (type = "directrequest") in the form minContractPaymentLinkJuels = 1000000000000000000,
in the "getBilling" section of the AccessControlledOffchainAggregator,
in the node's .env file as MINIMUM_CONTRACT_PAYMENT_LINK_JUELS,
or somewhere else?

To set the fee for a single job, add the following line to the job's .toml file:
minContractPaymentLinkJuels = <fee>
To set the fee globally for the node, add the following line to the .env file:
MINIMUM_CONTRACT_PAYMENT_LINK_JUELS = <fee>
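For reference, the fee line sits alongside the rest of the job spec. A minimal sketch of a direct-request .toml (the name and contract address below are hypothetical placeholders, and a real spec would also carry its observationSource pipeline) might look like:
type = "directrequest"
schemaVersion = 1
name = "example-direct-request-job"
contractAddress = "0xYourOracleContractAddress"
minContractPaymentLinkJuels = 1000000000000000000
Here the fee equals 1 LINK, since 1 LINK = 10^18 juels.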

Related

JMeter - Testing with 100 Users while reading Links from CSV file

I just started using JMeter recently.
What I want:
I want to run a test with 100 users, getting the links from a CSV file.
How I am doing it:
I created a Test Plan and added a Thread Group, a CSV Data Set Config (child of the Thread Group), and an HTTP Request.
Given values:
HTTP Request Defaults: URL address (tried both with and without HTTP in the protocol section)
Thread Group: Users: 100
Loop: Forever
CSV Data Set Config: File Name (full path; the file is not in the bin folder)
Variable Name: Path
Recycle on EOF: False
Stop Thread on EOF: True
HTTP Request: IP address:
Path: ${Path}
CSV file:
Path
Link1
Link2
Link3
What I am getting: The test executes, but it runs through all the links only once (one user); it does not run them for 100 users.
Note: I am running the Test Plan from command-line mode.
Thanks for your time.
If you want each user to go through all the links in the CSV file, you need to change the Sharing Mode setting of the CSV Data Set Config to Current Thread.
You can verify the behavior by adding the __threadNum() function as a request prefix/postfix.
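In the saved .jmx file, that setting corresponds to the shareMode property of the CSVDataSet element. A rough sketch (the filename is a hypothetical placeholder; the other values are taken from the question):
<CSVDataSet guiclass="TestBeanGUI" testclass="CSVDataSet" testname="CSV Data Set Config">
  <stringProp name="filename">/full/path/to/links.csv</stringProp>
  <stringProp name="variableNames">Path</stringProp>
  <boolProp name="recycle">false</boolProp>
  <boolProp name="stopThread">true</boolProp>
  <stringProp name="shareMode">shareMode.thread</stringProp>
</CSVDataSet>
shareMode.thread is what the GUI shows as Current Thread; the default, shareMode.all, shares a single cursor over the file across every thread, which is why each link was being fetched only once in total.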

How do I make JMeter use a different hostname/port for different threads of the same test plan

In my test scenario I have to test 2 URLs with different hosts and ports under the same test plan. Is it possible to do so?
You can define the values as properties and pass them to the JMeter script.
Add 2 rows to the Test Plan's User Defined Variables:
baseUrl with value ${__P(baseUrl,localhost)}
port with value ${__P(port,8080)}
localhost and 8080 are the default values; you can change them.
Then, when you execute, pass the values you want, e.g.:
jmeterw.cmd ... -JbaseUrl=192.168.0.33 -Jport=80
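With those variables defined, point the HTTP Request sampler (or HTTP Request Defaults) at them; the GUI fields would read:
Server Name or IP: ${baseUrl}
Port Number: ${port}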
Define your host and port combinations in a CSV file, endpoints.csv, like:
somehost,someport
someotherhost,someotherport
and put the CSV file in the "bin" folder of your JMeter installation.
Add a CSV Data Set Config to your test plan and configure it to read that file, exposing the two columns as the host and port variables.
Set the HTTP Request sampler to use the ${host} and ${port} variables defined via the CSV Data Set Config.
That's it: on each iteration (or virtual user hit) the next line will be picked up from the endpoints.csv file.
See the Using CSV DATA SET CONFIG article for more information on parameterising JMeter tests using CSV files.
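The CSV Data Set Config itself only needs a few fields set; a sketch assuming otherwise-default settings:
Filename: endpoints.csv
Variable Names: host,port
Recycle on EOF: True
Sharing Mode: All threads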

Store the log output into a file with cmdenv-output-file

I need to save the content of the log window of OMNeT++/Tkenv to a file, so I added the following to omnetpp.ini:
cmdenv-express-mode = false
cmdenv-output-file = log.txt
but I have two problems:
1) after the simulation, I do not find log.txt if I did not create it myself
2) when I create it before launching the simulation, under ../omnetpp-4.6/log.txt, I find it empty
I used EV << to display the contents of the variables I used. I need to resolve this problem in order to analyze the traffic, so how can I do that, please?
You have to start your simulation in Cmdenv mode: the cmdenv-* options only take effect under Cmdenv, and Tkenv ignores them, which is why the file stays missing or empty. To switch, go to Run | Run Configurations, select your configuration, then select Command line as the User interface. The log file is created in the simulation's directory by default.
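Alternatively, you can launch the simulation under Cmdenv straight from the shell; a sketch assuming an OMNeT++ 4.x build whose executable is named mysim (hypothetical name), run from the directory containing omnetpp.ini:
./mysim -u Cmdenv -c General
With cmdenv-express-mode = false and cmdenv-output-file = log.txt set as above, the EV << output then ends up in log.txt.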

Hadoop Number of Reducers Configuration Options Priority

What are the priorities of the following 3 options for setting the number of reducers? In other words, if all three are set, which one will be taken into account?
Option 1:
setNumReduceTasks(2) within the application code
Option 2:
-D mapreduce.job.reduces=2 as a command line argument
Option 3:
through the $HADOOP_CONF_DIR/mapred-site.xml file
<property>
  <name>mapreduce.job.reduces</name>
  <value>2</value>
</property>
According to Hadoop: The Definitive Guide:
The -D option is used to set the configuration property with key color to the value yellow. Options specified with -D take priority over properties from the configuration files. This is very useful because you can put defaults into configuration files and then override them with the -D option as needed. A common example of this is setting the number of reducers for a MapReduce job via -D mapred.reduce.tasks=n. This will override the number of reducers set on the cluster or set in any client-side configuration files.
You have them ranked in priority order: Option 1 will override Option 2, and Option 2 will override Option 3. In other words, Option 1 will be the one used by your job in this scenario.
First priority: passing configuration parameters through the command line (while submitting the MR application)
Second priority: setting configuration parameters in the application code
Third priority: the default parameters read from the configuration files, such as core-site.xml, hdfs-site.xml and mapred-site.xml (plus environment and logging settings from hadoop-env.sh and log4j.properties)
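To see where each option enters the job's Configuration, consider a minimal driver sketch (the class and job names are hypothetical): the *-site.xml defaults are loaded when the Configuration is built, GenericOptionsParser then applies any -D overrides, and setNumReduceTasks() runs last in the driver, so the value set latest is the one the job submits with.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.GenericOptionsParser;

public class ReducerPriorityDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();     // Option 3: picks up mapred-site.xml defaults
        new GenericOptionsParser(conf, args);         // Option 2: applies -D mapreduce.job.reduces=2
        Job job = Job.getInstance(conf, "priority-demo");
        job.setNumReduceTasks(2);                     // Option 1: set last in code, overwrites the above
        System.out.println(job.getNumReduceTasks());  // the value the job will actually use
    }
}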

hadoop check if the path is valid and create if not

I have a simple MR job that needs to create a directory in HDFS based on a timestamp. I am having a hard time finding the correct API (in Hadoop 2.0.3) to check the status and create a directory if it doesn't exist. Can someone suggest the right way of doing it? Here is the existing code:
FileSystem fileSystem = FileSystem.get(new Configuration());
Calendar c = Calendar.getInstance();
String basepath = "/dev/group/data/json/";
for (Record record : records) {
    c.setTimeInMillis(record.timestamp);
    Path path = new Path(basepath + c.get(Calendar.YEAR) + "/" + c.get(Calendar.MONTH));
    // Check if the path is valid and create the hdfs folder if not
    FileStatus[] status = fileSystem.???
    context.write(key, new Text(mapper.writeValueAsString(record)));
}
Thanks
mkdirs returns false if the folder creation fails and true if it succeeds, so just use that; when it returns false, you know the directory wasn't created.
Checking whether it exists first doesn't really help, because that's an extra operation against the NameNode. You also have to worry about contention across multiple jobs. Consider the following situation:
Mapper 1 checks to see if dir abc exists -- it doesn't
Mapper 2 checks to see if dir abc exists -- it doesn't
Mapper 1 tries to create dir abc -- it succeeds
Mapper 2 tries to create dir abc -- it fails
So, long story short: just use mkdirs, because it's atomic, doesn't have the above problem, and requires less work from the NameNode.
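Applied to the asker's loop, a sketch of the fix might look like this (Record, records, key, mapper and context come from the question's snippet and are assumed to exist):
import java.util.Calendar;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;

FileSystem fileSystem = FileSystem.get(new Configuration());
Calendar c = Calendar.getInstance();
String basepath = "/dev/group/data/json/";
for (Record record : records) {
    c.setTimeInMillis(record.timestamp);
    Path path = new Path(basepath + c.get(Calendar.YEAR) + "/" + c.get(Calendar.MONTH));
    fileSystem.mkdirs(path); // like `mkdir -p`: creates missing parents, no-op if it already exists
    context.write(key, new Text(mapper.writeValueAsString(record)));
}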
