How to check if a particular thread has finished execution in JMeter

I want a function which takes a thread name and returns whether that thread is running or not. I have tried Java's isAlive() method, but it did not work.

As an exception I can give you a hint: if you look into the ThreadGroup class source code, you will see something like:
// List of active threads
private final ConcurrentHashMap<JMeterThread, Thread> allThreads = new ConcurrentHashMap<>();
So all active threads live in the allThreads map of the ThreadGroup.
Knowing this, you can attempt to find the thread by name in this map: if the lookup result is not null, the thread is running, and vice versa.
Example code:
def threadName = 'Thread Group 1-1'
def threadGroup = ctx.getThreadGroup()
def field = org.apache.commons.lang3.reflect.FieldUtils.getDeclaredField(threadGroup.getClass(), 'allThreads', true)
def allThreads = field.get(threadGroup)
def thread1 = allThreads.keySet().find { key -> key.getThreadName().equals(threadName) }
if (thread1 != null) {
    log.info('Thread ' + threadName + ' is running')
} else {
    log.info('Thread ' + threadName + ' is not running')
}
More information: Apache Groovy - Why and How You Should Use It

Related

How can I interpolate a random octet into a CIDR block in Terraform?

I have the following random_test.tf Terraform file, which I've successfully initialized:
resource "random_integer" "octet" {
min = 0
max = 255
}
variable "base_cidr_block" {
description = "Class A CIDR block in RFC 1918 range"
default = "10.0.0.0/8"
}
provider "null" {
base_cidr_block = "10.${random_integer.octet.result}.0.0/16"
}
output "ip_block" {
value = var.base_cidr_block
}
I'm using the null provider as a placeholder to test defining a 10.0.0.0/16 CIDR block with a random second octet. However, base_cidr_block is always 10.0.0.0/8 even though I'm expecting it to be assigned something like 10.100.0.0/16, which would then be shown on standard output as ip_block. Instead, I always get the default:
$ terraform plan
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # random_integer.octet will be created
  + resource "random_integer" "octet" {
      + id     = (known after apply)
      + max    = 255
      + min    = 0
      + result = (known after apply)
    }
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
  + ip_block = "10.0.0.0/8"
Running terraform apply then always sends ip_block = "10.0.0.0/8" to the console. What am I doing wrong?
Here's what I've come up with, although I may not understand the intent. (Note that in your configuration the ip_block output simply echoes var.base_cidr_block, and nothing ever changes that variable from its 10.0.0.0/8 default; the interpolated string is only assigned to an argument of the null provider, so it never reaches the output.)
First, I've created a module. I'm using the random_integer, and setting a keeper:
variable "netname" {
default = "default"
}
variable "subnet" {
default = "10.0.0.0/8"
}
resource "random_integer" "octet" {
min = 0
max = 255
keepers = {
netname = var.netname
}
}
output "rand" {
value = random_integer.octet.result
}
output "random-subnet" {
value = "${cidrsubnet("${var.subnet}", 8, random_integer.octet.result)}"
}
Next I call the module, passing in my keeper, and optionally the subnet:
module "get-subnet-1" {
source = "./module/"
netname = "subnet-1"
}
output "get-subnet-1" {
value = module.get-subnet-1.random-subnet
}
module "get-subnet-2" {
source = "./module/"
netname = "subnet-2"
}
output "get-subnet-2" {
value = module.get-subnet-2.random-subnet
}
Finally, my output:
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
get-subnet-1 = 10.2.0.0/16
get-subnet-2 = 10.6.0.0/16
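For completeness, here is a minimal sketch (my own, not from the answer above) of the most direct fix to the question's configuration, assuming you only need the interpolated string rather than a keeper-based module: drop the variable and the provider block and build the output straight from the random_integer result.
resource "random_integer" "octet" {
  min = 0
  max = 255
}
# the random octet is interpolated directly; a variable default is never rewritten by other configuration
output "ip_block" {
  value = "10.${random_integer.octet.result}.0.0/16"
}
The value shows as (known after apply) during terraform plan and only becomes a concrete CIDR such as 10.173.0.0/16 after terraform apply.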

Assert if the JMeter variable has incremented from the previous iteration

In JMeter I'm doing a GET request to check the ActiveMQ queue size.
This request is made inside a Loop Controller that runs, let's say, 4 times.
In each iteration I'm extracting the value of outQueueCount into a JMeter variable.
How can I do an assertion to verify that the current count value is greater than the one from the previous iteration?
If you have 2 JMeter variables holding numbers, you can compute the difference between them with the __intSum function:
${__intSum(${outQueueCount},-${currentCount},difference)}
difference will be a new JMeter variable holding the result; you can then check whether difference is 1, for example:
${__jexl3("${difference}" == "1")}
Add JSR223 Assertion as a child of the request which returns this outQueueCount
Put the following code into "Script" area:
def previousValue = vars.get('previousValue')
def currentValue = vars.get('outQueueCount')
if (previousValue == null) {
    // first iteration: nothing to compare against yet
    vars.put('previousValue', currentValue)
} else {
    long previous = previousValue as long
    long current = currentValue as long
    if (previous >= current) {
        AssertionResult.setFailure(true)
        AssertionResult.setFailureMessage('Queue size not incremented: previous value: ' + previous + ', current value: ' + current)
    }
    // remember the current value so the next iteration compares against it
    vars.put('previousValue', currentValue)
}
If the previous value is greater than or equal to the new one, you will get an error message and the sampler will be marked as failed.
More information: Scripting JMeter Assertions in Groovy - A Tutorial
1) Add a Counter as a child of your loop controller just before your request with the below configurations:
Start: 1
Increment: 1
Reference Name: Counter
2) Add a BeanShell PostProcessor as a child of your request after your regular expression extractor with the below script in the script area:
String Counter = vars.get("Counter");
vars.put("MyVar_" + Counter, vars.get("MyVar"));// MyVar is the name of your regular expression extractor.
3) Add a BeanShell Assertion after the above BeanShell PostProcessor with the below script in the script area:
int Counter = Integer.parseInt(vars.get("Counter"));
if (Counter > 1) {
    int Prev = Counter - 1;
    int CurrentCount = Integer.parseInt(vars.get("MyVar_" + Counter));
    int PrevCount = Integer.parseInt(vars.get("MyVar_" + Prev));
    if (CurrentCount < PrevCount) {
        Failure = true;
        FailureMessage = "CurrentCount = " + CurrentCount + " is less than " + "PrevCount = " + PrevCount;
    }
}

Spock to provide temporary files for the Gradle task method within expect: and where: blocks

I would like to use Spock's Data Driven Testing.
My task has a method with a File argument to process. The result is an array with the processed lines; if there is nothing valid to process, the array will have size 0. I used a JUnit rule for creating a temporary folder, and I suspect it's the problem.
How can I solve this?
class H2JSpec extends Specification {
    @Rule
    TemporaryFolder temporaryFolder
    @Shared
    private ArrayList<File> tempFiles = []

    def "let's build the mappings for template"() {
        setup:
        Project project = ProjectBuilder.builder().build()
        H2JTask task = project.task('h2j', type: H2JTask)
        def inputs = ["""#dsfs
""",
                """#this file defines the mapping of h files declarations into java errorcodes: autoenum=off prefix=ec_ class=test.framework.base.MsgErrorCodes
"""]
        tempFiles.add(temporaryFolder.newFile('1.txt'))
        tempFiles.add(temporaryFolder.newFile('2.txt'))
        tempFiles[0].withWriter { it << inputs[0] }
        tempFiles[1].withWriter { it << inputs[1] }

        expect:
        task.prepareCommandList(a).size() == b

        where:
        a            || b
        tempFiles[0] || 0
        tempFiles[1] || 1
    }
}
And the result is a java.io.FileNotFoundException.
To be honest, I'd eliminate the use of the Rule altogether and rewrite the test in the following way:
@Grab(group='org.spockframework', module='spock-core', version='1.0-groovy-2.4')
import spock.lang.*

class H2JSpec extends Specification {
    def "let's build the mappings for template"() {
        setup:
        def task = new A()

        when:
        def f = File.createTempFile('aaa', 'bbb')
        f.write(a)

        then:
        task.prepareCommandList(f).size() == b

        where:
        a || b
        """#dsfs
""" || 3
        """#this file defines the mapping of h files declarations into java errorcodes: autoenum=off prefix=ec_ class=test.framework.base.MsgErrorCodes
""" || 2
    }
}

class A {
    List<String> prepareCommandList(File f) {
        f.readLines()
    }
}

Using par map to increase performance

The code below runs a comparison of users and writes the results to a file. I've removed some code to make it as concise as possible, but speed is an issue in this code as well:
import scala.collection.JavaConversions._
object writedata {
  def getDistance(str1: String, str2: String) = {
    val zipped = str1.zip(str2)
    val numberOfEqualSequences = zipped.count(_ == ('1', '1')) * 2
    val p = zipped.count(_ == ('1', '1')).toFloat * 2
    val q = zipped.count(_ == ('1', '0')).toFloat * 2
    val r = zipped.count(_ == ('0', '1')).toFloat * 2
    val s = zipped.count(_ == ('0', '0')).toFloat * 2
    (q + r) / (p + q + r)
  }                                               //> getDistance: (str1: String, str2: String)Float

  case class UserObj(id: String, nCoordinate: String)

  val userList = new java.util.ArrayList[UserObj] //> userList : java.util.ArrayList[writedata.UserObj] = []
  for (a <- 1 to 100) {
    userList.add(new UserObj("2", "101010"))
  }

  def using[A <: { def close(): Unit }, B](param: A)(f: A => B): B =
    try { f(param) } finally { param.close() }    //> using: [A <: AnyRef{def close(): Unit}, B](param: A)(f: A => B)B

  def appendToFile(fileName: String, textData: String) =
    using(new java.io.FileWriter(fileName, true)) {
      fileWriter =>
        using(new java.io.PrintWriter(fileWriter)) {
          printWriter => printWriter.println(textData)
        }
    }                                             //> appendToFile: (fileName: String, textData: String)Unit

  var counter = 0                                 //> counter : Int = 0
  for (xUser <- userList.par) {
    userList.par.map(yUser => {
      if (!xUser.id.isEmpty && !yUser.id.isEmpty)
        synchronized {
          appendToFile("c:\\data-files\\test.txt", getDistance(xUser.nCoordinate, yUser.nCoordinate).toString)
        }
    })
  }
}
The above code was previously an imperative solution, so the .par functionality was within an inner and outer loop. I'm attempting to convert it to a more functional implementation while also taking advantage of Scala's parallel collections framework.
In this example the data set size is 100, but in the code I'm working on the size is 8000, which translates to 64,000,000 comparisons. I'm using a synchronized block so that multiple threads are not writing to the same file at the same time. A performance improvement I'm considering is populating a separate collection within the inner loop (userList.par.map(yUser => {) and then writing that collection out to a separate file.
Are there other methods I can use to improve performance, so that I can handle a List that contains 8000 items instead of the 100 in the example above?
I'm not sure if you removed too much code for clarity, but from what I can see, there is absolutely nothing that can run in parallel since the only thing you are doing is writing to a file.
EDIT:
One thing that you should do is move the getDistance(...) computation out of the synchronized block (compute it before the call to appendToFile); otherwise your parallelized code ends up being effectively sequential, because almost all of the work then happens under the lock.
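A minimal sketch of that change against the question's code (assuming nothing else about the surrounding program):
for (xUser <- userList.par) {
  userList.par.map(yUser => {
    if (!xUser.id.isEmpty && !yUser.id.isEmpty) {
      // the distance is computed in parallel, outside the lock
      val distance = getDistance(xUser.nCoordinate, yUser.nCoordinate)
      synchronized {
        // only the actual file append is serialized
        appendToFile("c:\\data-files\\test.txt", distance.toString)
      }
    }
  })
}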
Instead of calling a synchronized appendToFile, I would call appendToFile in a non-synchronized way, but have each call to that method add the new line to some synchronized queue. Then I would have another thread that flushes that queue to disk periodically. But then you would also need to add something to make sure that the queue is also flushed when all computations are done. So that could get complicated...
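Here is a rough, assumed sketch of that queue-based approach (the name QueuedFileWriter is mine, not from any library): workers enqueue lines, a single background thread drains the queue to disk, and finish() covers the "flush when all computations are done" part.
import java.util.concurrent.ConcurrentLinkedQueue

object QueuedFileWriter {
  private val queue = new ConcurrentLinkedQueue[String]()
  @volatile private var done = false

  def enqueue(line: String): Unit = queue.add(line)

  private val writer = new Thread(new Runnable {
    def run(): Unit = {
      val out = new java.io.PrintWriter(new java.io.FileWriter("c:\\data-files\\test.txt", true))
      try {
        // keep draining until the producers are finished and the queue is empty
        while (!done || !queue.isEmpty) {
          val line = queue.poll()
          if (line != null) out.println(line) else Thread.sleep(10)
        }
      } finally {
        out.close()
      }
    }
  })
  writer.start()

  // call once, after the parallel loops have completed
  def finish(): Unit = {
    done = true
    writer.join()
  }
}
The workers would call QueuedFileWriter.enqueue(getDistance(...).toString) instead of the synchronized appendToFile, and the main thread would call QueuedFileWriter.finish() once userList.par has been fully processed.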
Alternatively, you could also keep your code and simply drop the synchronization around the call to appendToFile; it seems that println itself is synchronized. However, that would be risky, since println is not officially documented as synchronized and that behavior could change in future versions.

how to read immutable data structures from file in scala

I have a data structure made of Jobs each containing a set of Tasks. Both Job and Task data are defined in files like these:
jobs.txt:
JA
JB
JC
tasks.txt:
JB T2
JA T1
JC T1
JA T3
JA T2
JB T1
The process of creating objects is the following:
- read each job, create it and store it by id
- read task, retrieve job by id, create task, store task in the job
Once the files are read, this data structure is never modified, so I would like the tasks within jobs to be stored in an immutable set. But I don't know how to do that in an efficient way. (Note: the mutable map storing jobs may be left mutable.)
Here is a simplified version of the code:
class Task(val id: String)

class Job(val id: String) {
  val tasks = collection.mutable.Set[Task]() // This should be immutable
}

val jobs = collection.mutable.Map[String, Job]() // This is ok to be mutable

// read jobs
for (line <- io.Source.fromFile("jobs.txt").getLines) {
  val job = new Job(line.trim)
  jobs += (job.id -> job)
}

// read tasks
for (line <- io.Source.fromFile("tasks.txt").getLines) {
  val tokens = line.split("\t")
  val job = jobs(tokens(0).trim)
  val task = new Task(job.id + "." + tokens(1).trim)
  job.tasks += task
}
Thanks in advance for every suggestion!
The most efficient way to do this would be to read everything into mutable structures and then convert to immutable ones at the end, but this might require a lot of redundant coding for classes with a lot of fields. So instead, consider using the same pattern that the underlying collection uses: a job with a new task is a new job.
Here's an example that doesn't even bother reading the jobs list--it infers it from the task list. (This is an example that works under 2.7.x; recent versions of 2.8 use "Source.fromPath" instead of "Source.fromFile".)
object Example {
  class Task(val id: String) {
    override def toString = id
  }

  class Job(val id: String, val tasks: Set[Task]) {
    def this(id0: String, old: Option[Job], taskID: String) = {
      this(id0, old.getOrElse(EmptyJob).tasks + new Task(taskID))
    }
    override def toString = id + " does " + tasks.toString
  }

  object EmptyJob extends Job("", Set.empty[Task]) { }

  def read(fname: String): Map[String, Job] = {
    val map = new scala.collection.mutable.HashMap[String, Job]()
    scala.io.Source.fromFile(fname).getLines.foreach(line => {
      line.split("\t") match {
        case Array(j, t) => {
          val jobID = j.trim
          val taskID = t.trim
          map += (jobID -> new Job(jobID, map.get(jobID), taskID))
        }
        case _ => /* Handle error? */
      }
    })
    new scala.collection.immutable.HashMap() ++ map
  }
}
scala> Example.read("tasks.txt")
res0: Map[String,Example.Job] = Map(JA -> JA does Set(T1, T3, T2), JB -> JB does Set(T2, T1), JC -> JC does Set(T1))
An alternate approach would read the job list (creating jobs as new Job(jobID,Set.empty[Task])), and then handle the error condition of when the task list contained an entry that wasn't in the job list. (You would still need to update the job list map every time you read in a new task.)
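A possible sketch of that alternate approach (my own, assumed; it reuses the Task and Job classes from the example above):
def readWithJobList(jobsFile: String, tasksFile: String): Map[String, Job] = {
  val map = new scala.collection.mutable.HashMap[String, Job]()
  // every job from jobs.txt starts out with an empty task set
  scala.io.Source.fromFile(jobsFile).getLines.foreach(line => {
    val jobID = line.trim
    map += (jobID -> new Job(jobID, Set.empty[Task]))
  })
  scala.io.Source.fromFile(tasksFile).getLines.foreach(line => {
    line.split("\t") match {
      case Array(j, t) => map.get(j.trim) match {
        // a job with a new task is a new job
        case Some(job) => map += (job.id -> new Job(job.id, job.tasks + new Task(t.trim)))
        // the error condition: the task refers to a job that was never declared
        case None => throw new IllegalArgumentException("Unknown job: " + j.trim)
      }
      case _ => /* Handle error? */
    }
  })
  new scala.collection.immutable.HashMap() ++ map
}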
I made a few changes for it to run on Scala 2.8 (mostly fromPath instead of fromFile, and () after getLines). It may be using a few Scala 2.8 features, most notably groupBy. Probably toSet as well, but that one is easy to adapt to 2.7.
I don't have the files to test it, but I changed this stuff from val to def, and the type signatures, at least, match.
class Task(val id: String)
class Job(val id: String, val tasks: Set[Task])

// read tasks
val tasks = (
  for {
    line <- io.Source.fromPath("tasks.txt").getLines().toStream
    tokens = line.split("\t")
    jobId = tokens(0).trim
    task = new Task(jobId + "." + tokens(1).trim)
  } yield jobId -> task
).groupBy(_._1).map { case (key, value) => key -> value.map(_._2).toSet }

// read jobs
val jobs = Map() ++ (
  for {
    line <- io.Source.fromPath("jobs.txt").getLines()
    job = new Job(line.trim, tasks(line.trim))
  } yield job.id -> job
)
You could always delay the object creation until you have all the data read in from the file, like:
case class Task(id: String)
case class Job(id: String, tasks: Set[Task])

import scala.collection.mutable.{Map, ListBuffer}

val jobIds = Map[String, ListBuffer[String]]()

// read jobs
for (line <- io.Source.fromFile("jobs.txt").getLines) {
  val jobId = line.trim
  jobIds += (jobId -> new ListBuffer[String]())
}

// read tasks
for (line <- io.Source.fromFile("tasks.txt").getLines) {
  val tokens = line.split("\t")
  val jobId = tokens(0).trim
  val task = jobId + "." + tokens(1).trim
  jobIds(jobId) += task
}

// create objects
val jobs = jobIds.map { j =>
  Job(j._1, Set() ++ j._2.map { Task(_) })
}
To deal with more fields, you could (with some effort) make a mutable version of your immutable classes, used for building. Then, convert as needed:
case class Task(id: String)
case class Job(val id: String, val tasks: Set[Task])

object Job {
  class MutableJob {
    var id: String = ""
    var tasks = collection.mutable.Set[Task]()
    def immutable = Job(id, Set() ++ tasks)
  }
  def mutable(id: String) = {
    val ret = new MutableJob
    ret.id = id
    ret
  }
}

val mutableJobs = collection.mutable.Map[String, Job.MutableJob]()

// read jobs
for (line <- io.Source.fromFile("jobs.txt").getLines) {
  val job = Job.mutable(line.trim)
  mutableJobs += (job.id -> job)
}

// read tasks
for (line <- io.Source.fromFile("tasks.txt").getLines) {
  val tokens = line.split("\t")
  val job = mutableJobs(tokens(0).trim)
  val task = Task(job.id + "." + tokens(1).trim)
  job.tasks += task
}

val jobs = for ((k, v) <- mutableJobs) yield (k, v.immutable)
One option here is to have some mutable but transient configurer class along the lines of the MutableMap above but then pass this through in some immutable form to your actual class:
val jobs: immutable.Map[String, Job] = {
  val mJobs = readMutableJobs
  immutable.Map(mJobs.toSeq: _*)
}
Then of course you can implement readMutableJobs along the lines you have already coded; a possible sketch follows.
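Here is one hedged sketch of readMutableJobs (assumed, not from the original answer), reusing the MutableJob builder from the snippet above:
def readMutableJobs: collection.mutable.Map[String, Job] = {
  val builders = collection.mutable.Map[String, Job.MutableJob]()
  // read jobs: one builder per job id
  for (line <- io.Source.fromFile("jobs.txt").getLines) {
    val id = line.trim
    builders += (id -> Job.mutable(id))
  }
  // read tasks: accumulate into the matching builder's mutable set
  for (line <- io.Source.fromFile("tasks.txt").getLines) {
    val tokens = line.split("\t")
    val builder = builders(tokens(0).trim)
    builder.tasks += Task(builder.id + "." + tokens(1).trim)
  }
  // convert each builder to its immutable form before handing the map back
  builders.map { case (id, b) => (id, b.immutable) }
}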
