In my build.gradle script, I have an Exec task that starts a multithreaded process, and I would like to limit the number of threads in that process to Gradle's max worker count, which can be specified on the command line with --max-workers, in gradle.properties with org.gradle.workers.max, and possibly other ways. How should I get this value so I can pass it on to my process?
I have already tried reading the org.gradle.workers.max property, but that property doesn't exist when the user hasn't set it explicitly and the max worker count is instead determined by some other means.
You may be able to access this value via the API: project.getGradle().getStartParameter().getMaxWorkerCount().
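For example, in a Groovy build.gradle — a minimal sketch, where the myProcess executable and its --threads flag are hypothetical placeholders for your own tool:

task startProcess(type: Exec) {
    // Gradle's effective max worker count, however it was specified:
    // --max-workers, org.gradle.workers.max, or Gradle's default
    def maxWorkers = project.gradle.startParameter.maxWorkerCount
    commandLine 'myProcess', "--threads=${maxWorkers}"
}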
The SCDF version I'm using is 2.9.6.
I want to make a composed task (CTR) A-B-C, where each task does the following:
A: SQL select on some source DB
B: process the DB data that A fetched
C: SQL insert on some target DB
The simplest way to make this work seems to be to define a shared work directory path, "some_work_directory", and pass it as an application property to A, B, and C. Under {some_work_directory}, I would store each task's result as a file (select.result, process.result, insert.result) and access them in sequence. If the expected input from the preceding task is missing, I can assume something went wrong and make the task exit with 1.
================
I tried with a composed task instance QWER, containing two tasks from the same application "global", named A and B. This simple application prints the test.value application property to the console, which defaults to "test" when no other value is given.
If I set test.value in the global tab of the SCDF launch builder, it is interpreted as app.*.test.value in the composed task's log. However, the SCDF logs of the child tasks A and B show that they do not pick up this configuration from the parent; both fail to resolve the input given at launch time.
If I instead set test.value as a per-app row in the launch builder and pass a value to A and B, as I would for a non-composed task, this fails as well. I know this is not the 'global' setting I need, but it suggests that CTR is not working correctly with the SCDF launch builder.
The only workaround I found is manually setting app.QWER.A.test.value=AAAAA and app.QWER.B.test.value=BBBBB in the launch freetext. This way, the input is converted to app.QWER-A.app.global4.test.value=AAAAA and app.QWER-B.app.global4.test.value=BBBBB, and the values print correctly.
I understand that this way I can set detailed configuration for each child task at launch time. However, if I just want to set some 'global' value that all tasks in one CTR instance share, there seems to be no feasible way.
Am I missing something? Thanks for any information in advance.
CTR orchestrates the execution of a collection of tasks; there is no implicit data transfer between them. If you want the data from A to be the input to B, and the output of B to become the input of C, you can either create one Task / Batch application that has readers and writers connected by a processor, or create a stream application for B and use a JDBC source and sink in place of A and C.
I'm using JMeter 5.5. I'm trying to find a way to set the starting value of a Counter programmatically. Essentially what I need to do is:
Start test
Read an int value from a file <-- setUp Thread Group
Set the start value of the Counter element to this value <-- setUp Thread Group
Iterate through the test/threads, incrementing this as a shared variable <-- Thread Group
Write the new value to the file <-- tearDown Thread Group
I've tried using props.put() (with __P/__setProperty), vars.put(), and System.setProperty(), all with no success.
Is it possible to set the starting value of a counter via code? It always seems to start with 0.
If this isn't possible, is it possible to create a shared variable that can be used across threads and incremented safely, to ensure no duplicate values are ever used?
I don't think you even need setUp and tearDown Thread Groups; you can do something like:
Read the counter from the file into the Counter's "Starting value" field via the __FileToString() function, like:
${__FileToString(counter.txt,,)}
Once all iterations are complete, you can write the new value back into the file using the __StringToFile() function:
${__StringToFile(counter.txt,${counter},false,)}
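If you would rather drive this from code (per the shared-variable part of the question), here is a minimal JSR223/Groovy sketch using an AtomicLong stored in props, which are shared across all threads; the sharedCounter property name and the counter.txt path are illustrative assumptions:

import java.util.concurrent.atomic.AtomicLong

// setUp Thread Group: seed the shared counter from the file once
def start = new File('counter.txt').text.trim() as long
props.put('sharedCounter', new AtomicLong(start))

// main Thread Group: each call returns a unique, ever-increasing value
def next = ((AtomicLong) props.get('sharedCounter')).incrementAndGet()
vars.put('counter', String.valueOf(next))

// tearDown Thread Group: persist the final value back to the file
new File('counter.txt').text = ((AtomicLong) props.get('sharedCounter')).get().toString()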
More information: How to Use a Counter in a JMeter Test
I'm concerned with how CSV Data Set Config behaves under JMeter's scoping rules and execution order.
For CSV Data Set Config the docs say "Lines are read at the start of each test iteration." At first I thought that referred to thread iterations; then I read Use jmeter to test multiple Websites, where the config is put inside a Loop Controller and lines are read on each loop iteration. I've tested with 5.1.1 and that is indeed how it works. But if I put the config at the root of the test plan, it reads a new line only on each thread iteration. Could I have predicted this behaviour from the docs alone, without trial and error? I cannot see how it follows from scoping + execution order + the docs on the CSV config element. Am I missing something?
I would also appreciate some ideas on why this actual behaviour is convenient and why the functionality was implemented this way.
P.S. How can I read one line of CSV into variables at the start of the test and then stop running that config to save CPU time? In 2.x there was a VariablesFromCSV config for that...
The Thread Group has an implicit Loop Controller inside it: the next line from the CSV will be read as soon as a LoopIterationListener.iterationStart() event occurs, no matter its origin.
It is safe to use CSV Data Set Config: it doesn't keep the whole file in memory, it reads the next line only when the aforementioned iterationStart() event occurs. However, it does keep an open file handle. If you have plenty of RAM but not enough file handles, you can read the whole file into memory at the beginning of the test, e.g. using a setUp Thread Group and a JSR223 Sampler with the following code:
SampleResult.setIgnore() // don't record this setUp sampler's result

// read the whole CSV into JMeter properties: line_1, line_2, ...
new File('/path/to/csv/file').readLines().eachWithIndex { line, index ->
    props.put('line_' + (index + 1), line)
}
Once done, you will be able to refer to the first line via the __P() function as ${__P(line_1,)}, to the second line as ${__P(line_2,)}, etc.
I have a daemon I am trying to start, and I would like to set a few variables in the daemon when starting it. Here is the script I use to control my daemons, located at RAILSAPP/script/daemon:
#!/usr/bin/env ruby
require 'rubygems'
require 'daemons'

# Resolve the app root and Rails environment relative to this script
ENV["APP_ROOT"] ||= File.expand_path("#{File.dirname(__FILE__)}/..")
ENV["RAILS_ENV_PATH"] ||= "#{ENV["APP_ROOT"]}/config/environment.rb"

# The daemon script to run is taken from the second command-line argument
script = "#{ENV["APP_ROOT"]}/daemons/#{ARGV[1]}"
Daemons.run(script, dir_mode: :normal, dir: "#{ENV["APP_ROOT"]}/tmp/pids")
When I start this daemon, I would like to pass it a variable, such as a reference to an ActiveRecord object, so I can base the daemon's initial run on it.
If you want to fetch a specific ActiveRecord object, you can pass either just the id, or the class name + id, as an additional command-line parameter. Since you're already using ARGV[1] for the script name, you could pass it as ARGV[2], e.g. something like Product_123 that you then parse via split and turn into a Product.find(123) to get the actual record, as in the sketch below.
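A minimal sketch of that parsing inside the daemon; the Product_123 format and the argument position are assumptions, and constantize requires the Rails environment to be loaded:

# e.g. the extra argument is "Product_123"
klass_name, id = ARGV[2].split('_')
record = klass_name.constantize.find(id)  # equivalent to Product.find(123)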
Another approach would be to put the object information into a queue like memcached or redis, and have the daemon fetch the information out of the queue. That would keep your daemon startup a bit simpler, and would allow you to queue up multiple records for the daemon to process. (Something just processing a single record would probably be better written as a script anyway.)
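A hedged sketch of the queue idea using the redis gem; the daemon_records list name is a hypothetical choice:

require 'redis'

# Producer side (e.g. your Rails app): enqueue a record reference
redis = Redis.new
redis.lpush('daemon_records', 'Product_123')

# Consumer side (inside the daemon's loop): block until an entry arrives
_list, entry = redis.brpop('daemon_records')
klass_name, id = entry.split('_')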
The other concern I have about your script is using ENV["APP_ROOT"]. Does that really need to go in the environment? What if you have a second daemon? It seems that it would be better as a local variable, and if you need it in the daemon, you can always get it relative to where the daemon's file is located anyway.
I have this in my initializer:
Delayed::Job.const_set( "MAX_ATTEMPTS", 1 )
However, my jobs are still re-running after failure, seemingly completely ignoring this setting.
What might be going on?
more info
Here's what I'm observing: jobs with a populated "last error" field and an "attempts" number of more than 1 (10+).
I've discovered I was reading the old/wrong wiki. The correct way to set this is:
Delayed::Worker.max_attempts = 1
Check your DBMS table delayed_jobs for records (jobs) that still exist after the job "fails"; the job will be re-run if the record is still there. If it shows a non-zero "attempts" count, then you know your constant setting isn't working.
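For example, from the Rails console (Delayed::Job is a regular ActiveRecord model, so this is just a standard query):

# List lingering jobs with their attempt counts
Delayed::Job.all.each do |job|
  puts "id=#{job.id} attempts=#{job.attempts} failed_at=#{job.failed_at.inspect}"
end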
Another guess is that the job's "failure," for some reason, is not being caught by DelayedJob. -- In that case, the "attempts" would still be at 0.
Debug by examining the delayed_job/lib/delayed/job.rb file, especially the self.workoff method, when one of your jobs "fails".
Added: @John, I don't use MAX_ATTEMPTS. To debug, look in the gem to see where it is used. It sounds like the problem is that the job is being handled in the normal way rather than limiting attempts to 1. Use the debugger or a logging statement to ensure that your MAX_ATTEMPTS setting is getting through.
Remember that the DelayedJob jobs runner is not a full Rails program, so it could be that your initializer setting is not being run. Look into the script you're using to run the jobs runner.