I am wondering if there is something I could use to create a simulator with JMeter that would pick the users from my "user list" based on some kind of pattern. In fact, even simpler: imagine I have users from 0 to N. Some of them are active, some of them are not. I would like to have simulated users that are active during a certain period (say, an hour), then go dormant while others become active, and so on. So, out of N total users I would have something like X unique active users per hour, Y unique active users per day, Z unique active users per week, etc.
I think I could write some kind of generator like this, but I am wondering if something already exists - as a JMeter plugin or just a library/class that I could use.
See the following test elements, which can help you implement the requested scenario:
Ultimate Thread Group - to control the virtual users' arrival rate and the time to hold the load
Constant Throughput Timer - to control the virtual users' activity in "requests per minute", which can be converted to "requests per second" or "requests per day" with simple arithmetic
Provide uniqueness of virtual users via:
CSV Data Set Config configuration element or __CSVRead() function - for a pre-defined user list
__Random or __RandomString function - for dynamic unique parameters.
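For example, with a pre-defined user list the __CSVRead() function could be referenced like this (the users.csv file name and its two-column layout are assumptions for illustration, not part of the question):
${__CSVRead(users.csv,0)}    -> user name taken from column 0 of the current line
${__CSVRead(users.csv,1)}    -> password taken from column 1 of the current line
${__CSVRead(users.csv,next)} -> advance the file pointer to the next line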
When we create a JMeter script through BlazeMeter or a third-party script recorder, and the UI involves insert/update/delete operations on records: when we run the same JMeter script with 100 users, do those new records get inserted/updated/deleted in the database as well? If yes, what happens to the data of the remaining 99 users if the UI enforces uniqueness?
When you record a user action it results in hard-coded values, so if you add a foo line in the UI it gets added to the database.
When you replay the test with 1 user, depending on your application implementation:
either another foo line will be added to the database,
or you will get an error saying this entry is already present.
When you run the same test with 100 users the result will be the same, to wit:
either you will have 100 identical new/updated entries
or you will have 100 errors
So I would suggest parameterizing your tests so that each thread (virtual user) operates on its own unique data, like:
have a CSV file with credentials for 100 users which can be read using CSV Data Set Config
when you add an entry you can also consider adding a unique prefix or suffix to it, for example (a combined example is shown after this list):
current virtual user number via __threadNum() function
current iteration via ${__jm__Thread Group__idx} pre-defined variable
current timestamp via __time() function
unique GUID-like structure via __UUID() function
etc.
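For instance, the combined expression below builds a value that is unique per virtual user, iteration and run; the entry_ prefix is only an illustration, and ${__jm__Thread Group__idx} assumes the Thread Group keeps its default name "Thread Group":
entry_${__threadNum}_${__jm__Thread Group__idx}_${__time(,)}_${__UUID}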
Given the following scenario:
A lambda receives an event via SQS
The lambda receives a uuid pointing to an entity.
The lambda may fail with an error
SQS will retry that particular entity several times
The lambda will be called with different entities thousands of times
Right now we monitor a custom error-count metric like myService.errorType.
This gives us an exact count of how many times an error occurred, independent of any specific entity: if one entity cannot be processed, say, 100 times, then the metric value will be 100.
What I'd like to have, though, is a distinct metric based on the UUID.
Example:
entity with id 123 fails 10 times
entity with id 456 succeeds
entity with id 789 fails 20 times
Then I'd like to have a metric with the value of 2 - because processing failed for only two entities (and not 30, as it would be reported right now).
While searching for a solution I found the possibility of using tags. But as the docs point out they are not meant for such a use-case:
Tags shouldn’t originate from unbounded sources, such as epoch timestamps, user IDs, or request IDs. Doing so may infinitely increase the number of metrics for your organization and impact your billing.
So are there any other possibilities to achieve my goals?
I've solved it now by verifying the status via code and by adding tags to the metrics:
occurrence:first
occurrence:subsequent
This way I can filter in my dashboard for occurrence:first only.
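For reference, a minimal sketch of how such a check could look with a dogstatsd-style Java client; the in-memory set is only a stand-in for whatever store is actually used to decide whether an entity has already failed (in a Lambda, in-memory state does not reliably survive between invocations):
import com.timgroup.statsd.NonBlockingStatsDClient;
import com.timgroup.statsd.StatsDClient;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ErrorMetrics {
    // prefix "myService" + aspect "errorType" => metric myService.errorType
    private static final StatsDClient STATSD =
            new NonBlockingStatsDClient("myService", "localhost", 8125);

    // Stand-in for the real "status" lookup (e.g. a table keyed by entity uuid).
    private final Set<String> failedUuids = ConcurrentHashMap.newKeySet();

    public void reportFailure(String entityUuid) {
        boolean first = failedUuids.add(entityUuid); // true only for the first failure of this entity
        STATSD.increment("errorType", first ? "occurrence:first" : "occurrence:subsequent");
    }
}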
To make sure things are clear, you have a metric called myService.errorType with a tag entity. This metric is a counter that will increase every time an entity is in error. You will then use this metric query:
sum:myService.errorType{*} by {entity}
When you speak about UUIDs, it seems that the cardinality is small (here you show 3), which means that every hour you will have a small number of UUIDs. In that case, adding the UUID to the metric tags is not as critical as user IDs, timestamps, etc., which have a limitless number of possible values.
I would invite you to add this uuid tag, and check the cardinality in the metric summary page to ensure it works.
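For example, with the same dogstatsd-style Java client as in the sketch above (entityUuid standing for the uuid taken from the SQS event), the tag can be attached when the counter is bumped:
StatsDClient statsd = new NonBlockingStatsDClient("myService", "localhost", 8125);
statsd.increment("errorType", "uuid:" + entityUuid); // emitted as myService.errorType tagged with the entity uuid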
Then, to get the number of UUIDs affected by errors, you can use something like:
count_not_null(sum:myService.errorType{*} by {uuid})
Finally, as an alternative, if the UUID cardinality can go through the roof, I would invite you to work with logs, or to use Christopher's solution, which seems to limit the cardinality increase as well.
I have an input account (Never Share) in which the user types a parameter for each month. On aggregate members of the Period dimension, for example YearTotal, I want the value to be the weighted average derived from two other accounts representing the cost and the quantity.
With the account properties I can roll my account up as an addition or as a simple average across the months, but obviously I get wrong data in both cases.
Anyone know a solution to my question?
Thanks a lot,
Daniele
Not sure exactly what you are asking. But I assume the following in my answer:
data entry by the user on the account Parameter (from the context, I think it is a price)
data entry at level-0 Period, i.e. the months
you want Essbase to show the Parameter value as typed in at the month level (Jan .. Dec)
you want Essbase to show Costs / Quantity for Q1/2/3/4 and the YearTotal
the Account and Period dimensions are dense
You did not specify if you are also reporting on YTD values and how you have implemented this in Essbase. I assume you do, but the preferred solution depends on how you have implemented this, so I take the "safe" solution here:
solution 1
This is the most straightforward solution:
Implement a "parameter_inp" account on which the user keys in the data. Set the account to "never consolidate".
Create a new "parameter" account, dynamic calc, and give it the formula "Costs/Quantity;".
Refer to "parameter" in your reports, and to "parameter_inp" for user entry
solution 2 - alternative
If you have a lot of these parameters, you'll end up with a system that is unpleasant for data entry and reporting for the end users. To solve it with data entry and reporting on the same "parameter" account, you need to tune your implementation of the Quarter and YearTotal calculations, including the YTD calculation. I see no way of getting this correct if you are using DTS.
This is the way to go forward:
Make use of a new dimension called "View", data entry on PER (= periodic), additional dynamic calc member "YTD", density: dense, place it after Period (so Account, Period, View)
Add a UDA to the "parameter" account, for example "WA"
Set custom dynamic calculations on the Quarter and YearTotal levels, something like: IF (@ISUDA(Account,"WA")) THEN <weighted-average logic, e.g. Costs/Quantity> ELSIF <check on FLOW/BALANCE> <logic for regular aggregation of FLOW and BALANCE items>, hereby overriding Essbase's native time logic
Set custom dynamic calculations for YTD (overriding DTS), and make an exception for UDA "WA"
With the query below we can get the current logons count:
SELECT VALUE AS current_logon_count FROM v$sysstat WHERE name = 'logons current';
Also, the query below will return the current sessions utilization:
SELECT resource_name,
current_utilization current_count
FROM v$resource_limit
WHERE resource_name IN ('sessions');
What is the difference between the current logons count and the current sessions utilization?
[Updated]
Sorry, my initial conclusion was too hasty, based only on a check of the Oracle docs.
Now I've tested these parameters on an Oracle instance, and indeed, current_utilization for 'sessions' does not show the maximum number of logons during the instance run time.
What it shows is well explained here.
In short:
v$sysstat "current logons" show current number of sessions in v$session.
And according to the link above, v$session holds only USER and BACKGROUND sessions.
There is another type of session: RECURSIVE.
v$resource_limit's current_utilization for 'sessions' reflects all three types of sessions, so in most cases these numbers are going to be different.
So, both parameters count the sessions currently in the instance, but they do it differently.
[Initial answer]
According to the description of the v$sysstat metric 'logons current':
This metric represents the current number of logons.
And according to the description of v$resource_limit, and further that of the SESSIONS initialization parameter:
SESSIONS specifies the maximum number of sessions that can be created in the system.
So, the difference is between the current and the maximum number of users.
First of all, I request that you please do not treat this as a duplicate.
I have seen all the threads on this issue, but none matched my case.
I am developing an online registration system using JBOSS 6 and Oracle 11g. I want to give every registrant a unique form number sequentially.
For this, I think Oracle's sequence_name.nextval for a primary key field is best for inserting a unique yet sequential number, and to retrieve the same value I would use sequence_name.currval. Up to this point, I hope it's OK.
But will this remain consistent if two or more concurrent users submit the web form simultaneously? (I mean, will there be any overlap or interchange of values among the concurrent users?)
More precisely, is it session dependent?
Let me give two hypothetical situations so that matter becomes clearer.
Say there are two users, user1 and user2, trying to register at the same time, sitting in New York and Paris respectively. The max(form_no) is, say, 100 before they click the submit button. Now, in the code, I have written something like
insert into member(....) values(seq_form_no.nextval,....).
Now, since the two users will invoke the same query from two different terminals, will each get their own sequential id, or could user1 get user2's and vice versa? I hope I have made the issue clear. I know the sequence values will be unique, but I want each user to be associated with the id that was actually inserted for them.
Thanks in advance.
I'm not sure I understand. But simply said, a SEQUENCE ensures uniqueness of the generated numbers among concurrent transactions/connections. Unless the sequence was created with the CYCLE option, you can rely on a strictly monotonically increasing (resp. decreasing) numbering. But you cannot rely on the absence of gaps (probably what you were expecting when talking about "sequential numbers").
Worth mentioning that sequence numbers never go backward. When someone acquires a value, it is "consumed" from the sequence and will never go back into it (CYCLE aside) -- even if you roll back the current transaction.
From the doc (emphasis mine):
When a sequence number is generated, the sequence is incremented, independent of the transaction committing or rolling back. If two users concurrently increment the same sequence, then the sequence numbers each user acquires may have gaps, because sequence numbers are being generated by the other user. One user can never acquire the sequence number generated by another user. After a sequence value is generated by one user, that user can continue to access that value regardless of whether the sequence is incremented by another user.
My JSP is a little bit "rusty", but something like this should work as expected; wrapping both statements in a transaction ensures that currval is read on the same connection that performed the INSERT:
<sql:transaction dataSource="${ds}">
  <sql:update var="result">
    INSERT INTO member(....) VALUES(seq_form_no.nextval, ....)
  </sql:update>
  <sql:query var="last_inserted_member_id">
    SELECT seq_form_no.currval FROM DUAL
  </sql:query>
</sql:transaction>
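As an alternative, if the insert is done from Java code rather than JSTL, a single JDBC statement can return the generated number directly, avoiding the separate currval query; the table and column names below ("member", "form_no", "name") are placeholders:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal sketch: insert a row and read back the form number that seq_form_no
// assigned to it, all within the same statement and session.
public long insertMember(Connection con, String name) throws SQLException {
    String sql = "INSERT INTO member (form_no, name) VALUES (seq_form_no.nextval, ?)";
    // Naming the FORM_NO column makes the Oracle driver return the actual
    // column value (the sequence number) instead of a ROWID.
    try (PreparedStatement ps = con.prepareStatement(sql, new String[] {"FORM_NO"})) {
        ps.setString(1, name);
        ps.executeUpdate();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            keys.next();
            return keys.getLong(1);
        }
    }
}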