How to change the content (value) of file "capacity-scheduler.xml" dynamically using REST APIs? - hadoop

I am trying to change the value of the property "yarn.scheduler.capacity.maximum-am-resource-percent" while a job is running, in order to give more resources to the AM container. My basic logic is: with more resources available to the AM, more YarnChild tasks can run, which results in better throughput.
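For reference, a hedged sketch rather than a confirmed answer for this exact setup: recent Hadoop releases document a scheduler configuration mutation REST endpoint (PUT /ws/v1/cluster/scheduler-conf) that can change Capacity Scheduler properties at runtime, but only when yarn.scheduler.configuration.store.class points at a mutable store. The ResourceManager address rm-host:8088 and the target value 0.5 below are assumptions, not values from the question.

```java
// Hedged sketch: update maximum-am-resource-percent at runtime via the YARN
// scheduler-conf mutation API. Assumes the Capacity Scheduler is backed by a
// mutable configuration store and the ResourceManager web UI runs at rm-host:8088.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpdateAmResourcePercent {

  public static void main(String[] args) throws Exception {
    String body =
        "<sched-conf>"
      + "  <global-updates>"
      + "    <entry>"
      + "      <key>yarn.scheduler.capacity.maximum-am-resource-percent</key>"
      + "      <value>0.5</value>"
      + "    </entry>"
      + "  </global-updates>"
      + "</sched-conf>";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://rm-host:8088/ws/v1/cluster/scheduler-conf"))
        .header("Content-Type", "application/xml")
        .PUT(HttpRequest.BodyPublishers.ofString(body))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}
```

Without the mutation store enabled, the usual (non-REST) route is to edit capacity-scheduler.xml and run yarn rmadmin -refreshQueues.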

Related

JMeter: How to assign weights/frequencies/sampleRates to ThreadGroups/"TestSets" dynamically from CSV file

I would like to create a JMeter test setup whose sampling and test variation parameters are controlled entirely by CSV files (i.e. without modifying the JMX file). It should be run with Maven.
The idea is as follows:
sources.csv contains
sampleRate;file
40;samplesForController1.csv
30;samplesForController2.csv
5;samplesForController3.csv
...
sampleRate should determine how often a certain set of tests (defined by the parameters in the respective file) is executed, relative to the others.
How could this be achieved? I'm asking about the first step here (making sure the files/test samples are sampled/executed according to the indicated sampleRate), as I think I can solve the second part (dealing with the parameters in samplesForController1.csv etc.) by myself.
P.S.
I'm struggling with the options presented in "In JMeter, I have to divide number of thread into the multiple http requests in different percentage but have to keep sequence remain same", since:
as far as I can see, thread groups cannot be created on-the-fly/dynamically/programmatically
apparently, the Throughput Controller needs to know its child elements (and their probabilities) upfront (i.e. they cannot be created dynamically); otherwise, sampling is very odd (I could not get it to maintain the desired sampleRate)
I have not tried to integrate jmeter-plugins into the Maven build so far, as my impression is that the available plugins/controllers also need to know their child elements upfront
It is possible to create thread groups programmatically; check out the following (a short jmeter-java-dsl sketch follows this list):
Five Ways To Launch a JMeter Test without Using the JMeter GUI
jmeter-from-code example project
jmeter-java-dsl project
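As a rough illustration of the jmeter-java-dsl option (a sketch under assumptions: CSV parsing is omitted, and the thread counts and URLs below are placeholders for values read from sources.csv, not the asker's real data):

```java
// Sketch: build one thread group per sources.csv row in plain Java with
// jmeter-java-dsl, so nothing has to be predefined in a JMX file.
import static us.abstracta.jmeter.javadsl.JmeterDsl.*;

public class CsvDrivenPlan {

  public static void main(String[] args) throws Exception {
    int rate1 = 40;  // e.g. parsed from "40;samplesForController1.csv"
    int rate2 = 30;  // e.g. parsed from "30;samplesForController2.csv"

    testPlan(
        // one possible interpretation: map each sampleRate to a thread count
        threadGroup(rate1, 1, httpSampler("https://test.example.com/controller1")),
        threadGroup(rate2, 1, httpSampler("https://test.example.com/controller2"))
    ).run();
  }
}
```

Because the whole plan is assembled in Java, the number of thread groups and their weights can come from any file read at build time, which works around the "thread groups cannot be created on-the-fly" limitation of a static JMX file.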
You can use a Switch Controller with a function like __groovy() to generate the child element index; an example implementation can be found in the Running JMeter Samplers with Defined Percentage Probability article, and the selection logic is sketched in Java after this answer.
It's not a problem to use JMeter Plugins with Maven; see the "Adding jar's to the /lib/ext directory" documentation section for an example setup.
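Regarding the Switch Controller option, the kind of weighted selection the __groovy() expression would compute can be sketched in plain Java for clarity (the weights 40/30/5 mirror the sampleRate column; this illustrates weighted index selection in general, it is not code from the linked article):

```java
// Sketch: pick the index of the child element to execute, proportionally to
// the configured weights. A Switch Controller's "Switch Value" would hold an
// expression computing this kind of index.
import java.util.concurrent.ThreadLocalRandom;

public class WeightedChildIndex {

  static int pick(int... weights) {
    int total = 0;
    for (int w : weights) {
      total += w;
    }
    // draw a number in [0, total) and walk the weights until it is exhausted
    int r = ThreadLocalRandom.current().nextInt(total);
    for (int i = 0; i < weights.length; i++) {
      r -= weights[i];
      if (r < 0) {
        return i;
      }
    }
    return weights.length - 1; // not reached with positive weights
  }

  public static void main(String[] args) {
    System.out.println(pick(40, 30, 5)); // prints 0, 1 or 2
  }
}
```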

Working of onTrigger - nifi custom processor

I just started learning about custom processors in NiFi. I want to understand how onTrigger works in a specific case. I am doing some operations in the onTrigger function using the property values which are defined in the NiFi processor interface.
Example: a property value in the custom processor takes a string separated by ',', and in the onTrigger function I write code which converts the string into an array of Strings and removes the additional whitespace.
My question is: will this operation run every time a flowfile passes through the custom processor, or will the conversion happen only once?
I tried going through the official development docs but couldn't find info on this.
The Java code of a processor is compiled when you run a Maven build to produce the NAR file. The code is not compiled by NiFi itself.
You then deploy a NAR file to a NiFi instance by placing it in the lib directory, and then you use components from that NAR in your flow by adding them to the canvas.
Once a component is on the canvas and it is started, then the onTrigger method is called according to the scheduling strategy.
Whatever code is in onTrigger will run for every execution of the processor, so your code to read the property and split the value will run every time.
If the property supports expression language from flow files, then you need to run this code every time in onTrigger because the resulting value could be different for every flow file.
If the property does not support expression language from flow files, then you can instead use a method annotated with @OnScheduled to process the property value into whatever you need and store it in a member variable of the processor; this way it only happens once.
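A minimal sketch of that pattern (the processor, property and relationship names are made up for illustration and are not taken from the question's code):

```java
// Sketch: parse a comma-separated property once in @OnScheduled and reuse the
// result in every onTrigger call.
import java.util.Arrays;
import java.util.List;
import java.util.Set;

import org.apache.nifi.annotation.lifecycle.OnScheduled;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;
import org.apache.nifi.processor.util.StandardValidators;

public class SplitPropertyProcessor extends AbstractProcessor {

  static final PropertyDescriptor FIELDS = new PropertyDescriptor.Builder()
      .name("Fields")
      .description("Comma-separated list of field names")
      .required(true)
      .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
      .build();

  static final Relationship REL_SUCCESS = new Relationship.Builder()
      .name("success")
      .build();

  // Parsed once per start, not per flow file; volatile because onTrigger may
  // run on a different thread than the @OnScheduled callback.
  private volatile String[] fields;

  @Override
  protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
    return List.of(FIELDS);
  }

  @Override
  public Set<Relationship> getRelationships() {
    return Set.of(REL_SUCCESS);
  }

  @OnScheduled
  public void parseFields(ProcessContext context) {
    // Runs once when the processor is started.
    fields = Arrays.stream(context.getProperty(FIELDS).getValue().split(","))
        .map(String::trim)
        .toArray(String[]::new);
  }

  @Override
  public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
    FlowFile flowFile = session.get();
    if (flowFile == null) {
      return;
    }
    // ... per-flow-file work using the already-parsed 'fields' array ...
    session.transfer(flowFile, REL_SUCCESS);
  }
}
```

If the property did support expression language against flow file attributes, the parsing would have to stay in onTrigger, using context.getProperty(FIELDS).evaluateAttributeExpressions(flowFile).getValue() for each flow file.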

Avoid failed cases in jmeter

I am running JMeter to perform performance testing on a web application. The web application restricts the creation of duplicate tasks. When I record the scenario using BlazeMeter and run it with 100 threads, the report shows the task creation URL as failed, because every thread tries to create a task with the same name. How can I avoid this?
In the majority of cases you simply cannot record and replay a JMeter script without prior modifications, as you need to perform correlation and parameterization.
In your particular case you need to identify the task name/id; most probably it is a parameter of a POST request. Replace the recorded hard-coded value with a JMeter function which will generate a random value, for example:
__Random() - for a random number in the given range
__RandomString() - for a random string from the given characters
__UUID() - creates a GUID-like structure with a very high chance of being unique, e.g. 123e4567-e89b-12d3-a456-426655440000

Adding elements at runtime and executing a jmeter test plan

I am trying to build a JMeter test plan where all the test values are supplied from a CSV data file. I want to add assertions (provided in the data file) to my HTTP Request at runtime and execute the test. The reason for doing this is to keep the plan flexible with respect to the number of assertions. In my case, the assertions do get added at runtime; however, they fail to get executed. May I know what should be done to get the components both added and executed in the same flow?
For example: A part of plan looks like:
XYZ
--HTTP Sampler
-- Response Assertion1
-- Response Assertion2
-- JSON Extractor
where XYZ --> keyword-based transaction controller (a reusable component)
Every time I have a request of type XYZ, this chunk of components will get executed. In my case, I do not want to place anything such as assertions, pre/post processors or extractors in the test plan beforehand. I want to generate these components at run time and execute them (as per my test requisites).
Issue: the problem here is that I can't load the components programmatically and execute them in the same flow. The reason is that the test compiler does not know beforehand which components it needs to execute, so it bypasses the newly added components.
So, I need some alternative solution to execute this.
You can add a Response Assertion (or several) with the "Patterns to Test" field filled with a variable such as ${testAssert1}, and make the variable empty by default. For example,
put testAssert1 with an empty value in User Defined Variables.
Your assertion(s) will pass until you set the variable to a different value at run time, for example using a User Parameters preprocessor.

Recording application using template

I have recorded my web application through a template and just want to confirm whether the load test result I am getting is correct. Does simply increasing the number of users give proper results? Is it enough for load testing of a web application?
First of all you need to ensure that your test does what it is supposed to do. Recorded tests can rarely be replayed successfully as-is, so normally you should proceed as follows:
Add View Results Tree listener and run your test with 1 user. Inspect request and response details to verify your test steps.
Perform correlation and parametrization if required.
Correlation: the process of identifying and handling any dynamic parameters. Most often people use Regular Expression Extractor for it.
Parametrization: the process of making your test data-driven. For example, if your application assumes multiple authenticated users, you need to store the credentials somewhere. The most commonly used test element for this is the CSV Data Set Config.
Make your test realistic. Virtual users simulated by JMeter need to represent real users using real browsers as closely as possible, with all the related stuff: cookies, headers, cache, etc. See How To Make JMeter Behave More Like A Real Browser to learn how to configure JMeter to act closer to real users. Also, real users need some time to "think" between operations, so make sure you are using Timers to simulate this behaviour as well.
Only after you apply the above points should you add more virtual users. Again, run your test with 2-3 users and iterations to ensure your test functions as designed. Once you are happy with it you can increase the load, but don't overwhelm your server: increase the load gradually and check the impact of the increasing load on your application, i.e. how response time, throughput and the number of errors change as you increase the load. The same applies to decreasing the load: don't turn it off at once, decrease the number of virtual users gradually.
Building a Web Test Plan
Building an Advanced Web Test Plan
