I started to read the famous Martin Fowler book, Patterns of Enterprise Application Architecture.
I have to mention that I am reading the book translated into my native language, so that might be the reason for my misunderstanding.
I found these definitions (back-translated into English):
Response time - amount of time to process some external request
Latency - minimal amount of time before getting any response.
To me they sound the same. Could you please highlight the difference?
One way of looking at this is to say that transport latency + processing time = response time.
Transport latency is the time it takes for a request/response to be transmitted to/from the processing component. Then you need to add the time it takes to process the request.
As an example, say that 5 people try to print a single sheet of paper at the same time, and the printer takes 10 seconds to process (print) each sheet.
The person whose print request is processed first sees a latency of 0 seconds and a processing time of 10 seconds - so a response time of 10 seconds.
Whereas the person whose print request is processed last sees a latency of 40 seconds (the 4 people before him) and a processing time of 10 seconds - so a response time of 50 seconds.
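A tiny Python sketch of the same arithmetic (purely illustrative; the numbers come straight from the example above) shows how the queue position drives the difference:

service_time = 10                              # seconds the printer needs per sheet
for position in range(5):                      # 0 = served first, 4 = served last
    latency = position * service_time          # time spent waiting in the queue
    response_time = latency + service_time     # waiting + processing
    print(f"request {position + 1}: latency {latency}s, response time {response_time}s")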
As Martin Kleppmann says in his book Designing Data-Intensive Applications:
Latency is the duration that a request is waiting to be handled - during which it is latent, awaiting service. It is used for diagnostic purposes, e.g. latency spikes.
Response time is the time between a client sending a request and receiving a response. It is the sum of round-trip latency and service time. It is used to describe the performance of an application.
This article is a good read on the difference, and is best summarized with this simple equation,
Latency + Processing Time = Response Time
where
Latency = the time the message is in transit between two points (e.g. on the network, passing through gateways, etc.)
Processing time = the time it takes for the message to be processed (e.g. translation between formats, enriched, or whatever)
Response time = the sum of these.
If processing time is reasonably short, which in well-designed systems is the case, then for practical purposes response time and latency can be the same in terms of the perceived passage of time. That said, to be precise, use the defined terms and don't confuse or conflate the two.
I differentiate them using the example below.
A package is sent from A to C via B, where
A-B takes 10 sec, B (processing) takes 5 sec, B-C takes 10 sec
Latency = (10 + 10) sec = 20 sec
Response time = (10 + 5 + 10) sec = 25 sec
Latency
The time from the source sending a packet to the destination receiving it
Latency is the time it takes for a message, or a packet, to travel from its point of origin to the point of destination. That is a simple and useful definition, but it often hides a lot of useful information — every system contains multiple sources, or components, contributing to the overall time it takes for a message to be delivered, and it is important to understand what these components are and what dictates their performance.
Let’s take a closer look at some common contributing components for a typical router on the Internet, which is responsible for relaying a message between the client and the server:
Propagation delay
Amount of time required for a message to travel from the sender to receiver, which is a function of distance over speed with which the signal propagates.
Transmission delay
Amount of time required to push all the packet’s bits into the link, which is a function of the packet’s length and data rate of the link.
Processing delay
Amount of time required to process the packet header, check for bit-level errors, and determine the packet’s destination.
Queuing delay
Amount of time the packet is waiting in the queue until it can be processed.
The total latency between the client and the server is the sum of all the delays just listed
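For illustration only (the delay values below are hypothetical, not taken from the text), the total one-way latency is simply the sum of the four components:

propagation_delay_ms = 20.0   # distance / propagation speed of the signal
transmission_delay_ms = 0.1   # packet length / data rate of the link
processing_delay_ms = 0.05    # header processing, error checks, routing decision
queuing_delay_ms = 5.0        # time spent waiting in the router's queue

total_latency_ms = (propagation_delay_ms + transmission_delay_ms
                    + processing_delay_ms + queuing_delay_ms)
print(f"one-way latency: {total_latency_ms:.2f} ms")   # 25.15 ms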
Response Time
The total time from sending the packet until receiving the reply back from the receiver.
Q: Could you please highlight the difference?
Let me start with a know-how from professionals in ITU-T (the former CCITT), who have for decades spent many thousands of man-years of effort at the highest levels of professional experience, and have developed an accurate and responsible methodology for measuring both:
What have industry standards adopted for coping with this?
Since the early years of international industry standards (going as far back as somewhere deep in the 1960s), these industry professionals have created the concept of testing complex systems in a repeatable and re-inspectable manner.
A System-under-Test (SuT), inter-connected and inter-acting across a mediating mezzo-system:
SuT-component-[A]
|
+------------------------------------------------------------------------------------------------------------[A]-interface-A.0
| +------------------------------------------------------------------------------------------------[A]-interface-A.1
| |
| | SuT-component-[B]
| | |
| | +-------------------------[B]-interface-B.1
| | | +---------[B]-interface-B.0
| | ???????????????? | |
| | ? mezzo-system ? | |
+-----------+ ???????????????? +---------------+
| | ~~~~~~~~~~~~~~~~~~~~~~~~ ??? ... ... ??? ~~~~~~~~~~~~~~~~~~~~~~~~ | |
| | ~~<channel [A] to ???>~~ ??? ... ... ??? ~~<channel ??? to [B]>~~ | |
| | ~~~~~~~~~~~~~~~~~~~~~~~~ ??? ... ... ??? ~~~~~~~~~~~~~~~~~~~~~~~~ | |
+-----------+ ???????????????? +---------------+
|
No matter how formal this methodology may seem, it brings both clarity and exactness when formulating (and likewise when designing, testing and validating) requirements that relate closely and explicitly to SuT-components, SuT-interfaces, SuT-channels, and also to constraints on interactions across the exo-system(s), including limitations on responses to the appearance of any external (typically adverse) noise or disturbing events.
In the end, and to the benefit of clarity, all parts of the intended SuT-behaviour can be declared against a set of unambiguously defined and documented REFERENCE_POINT(s), for which the standard defines and documents all properties.
A Rule of Thumb:
The LATENCY, most often expressed as a TRANSPORT LATENCY (between a pair of REFERENCE_POINTs), relates to the duration of a trivial / primitive event propagation across some sort of channel(s), where the event processing does not transform the content of the propagated event. (Ref. memory-access latency - it does not re-process the data, it just delivers it, taking some time to "make it".)
The PROCESSING means transforming an event in some remarkable manner inside a SuT-component.
The RESPONSE TIME (observed at REFERENCE_POINT(s) of the same SuT-component) means the resulting duration of some rather complex, End-to-End transaction processing, which is neither a trivial TRANSPORT LATENCY across a channel, nor a simplistic in-SuT-component PROCESSING, but some composition of several (potentially many) such mutually interacting steps, working along the chain of causality (adding random stimuli, where needed, to represent noise / error disturbances). (Ref. database-engine response times creep up with growing workloads, precisely because of the increased concurrent use of the processing resources needed for the internal retrieval of the requested information, its internal re-processing and the final delivery re-processing, before the resulting "answer" is delivered to the requesting counter-party.)
|
| SuT_[A.0]: REFERENCE_POINT: receives an { external | internal } <sourceEvent>
| /
| / _SuT-[A]-<sourceEvent>-processing ( in SuT-[A] ) DURATION
| / /
| / / _an known-<channel>-transport-LATENCY from REFERENCE_POINT SuT-[A.1] to <mezzo-system> exosystem ( not a part of SuT, yet a part of the ecosystem, in which the SuT has to operate )
| / / /
| / / / _a mezzo-system-(sum of all)-unknown-{ transport-LATENCY | processing-DURATION } duration(s)
| / / / /
| / / / / _SuT_[B.1]: REFERENCE_POINT: receives a propagated <sourceEvent>
| / / / / /
| / / / / / _SuT_[B.0]: REFERENCE_POINT: delivers a result == a re-processed <sourceEvent>
| / / / / | / | /
|/ /| /................ / |/ |/
o<_________>o ~~< chnl from [A.1] >~~? ??? ... ... ??? ?~~<chnl to [B.1]>~~~~~~? o<______________>o
| |\ \ \
| | \ \ \_SuT-[B]-<propagated<sourceEvent>>-processing ( in SuT-[B] ) DURATION
| | \ \
| | \_SuT_[A.1]: REFERENCE_POINT: receives \_an known-<channel>-transport-LATENCY from <mezzo-system to REFERENCE_POINT SuT_[B.1]
| |
| | | |
o<--------->o-----SuT-test( A.0:A.1 ) | |
| | | |
| | o<-------------->o---SuT-test( B.1:B.0 )
| | | |
| o<----may-test( A.1:B.1 )-------------------------------------------->o |
| | exo-system that is outside of your domain of control, | |
| indirectly, using REFERENCE_POINT(s) that you control |
| |
| |
o<-----SuT-End-to-End-test( A.0:B.0 )------------------------------------------------------------->o
| |
Using this ITU-T / CCITT methodology, an example of a well-defined RESPONSE TIME test would be a test of completing a transaction: it measures the net duration between delivering a source-event onto REFERENCE_POINT [A.0] (entering SuT-component-[A]) and receiving the intended response back on a given REFERENCE_POINT (be it the same one, [A.0], or another, purpose-specific one, [A.37]), after the whole SuT has delivered an answer from its remote part(s) (i.e. the delivery from [A] to [B], plus the processing inside SuT-component-[B], plus the answer delivery from [B] back to [A]).
Being as explicit as possible saves potential future misunderstanding (which international industry standards have always fought to avoid).
So requirements expressed like:
1) a RESPONSE_TIME( A.0:A.37 ) must be under 125 [ms]
2) a net TRANSPORT LATENCY( A.1:B.1 ) may exceed 30 [ms] in less than 0.1% of cases per BAU
are clear and sound (and easy to measure), and everybody interested can interpret both the SuT-setup and the test-results.
Meeting these unambiguous requirements qualifies such a defined SuT-behaviour as safely compliant with the intended set of behaviours, or lets professionals cheaply detect, document and disqualify those that are not.
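A minimal sketch (hypothetical sample data; it simply assumes the measurements have already been collected at the respective REFERENCE_POINTs) of how such requirements could be checked:

# Hypothetical measurements in [ms], collected at the REFERENCE_POINTs named above.
response_times_A0_A37 = [112.4, 118.9, 124.0, 97.3, 121.7]
transport_latencies_A1_B1 = [12.0, 14.2, 13.5, 11.8, 13.0]

# 1) every RESPONSE_TIME( A.0:A.37 ) must be under 125 [ms]
req_1_met = all(t < 125.0 for t in response_times_A0_A37)

# 2) TRANSPORT LATENCY( A.1:B.1 ) may exceed 30 [ms] in less than 0.1% of cases
share_over_30ms = sum(t > 30.0 for t in transport_latencies_A1_B1) / len(transport_latencies_A1_B1)
req_2_met = share_over_30ms < 0.001

print(req_1_met, req_2_met)   # True True for this sample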
Related
I am currently trying to set up a Splunk search query in a dashboard that checks a specific time interval. The job I am trying to set it up with runs three times a day: once at 6am, once at 12:20pm, and once at 16:20 (4:20pm). Currently the query just searches for the latest time and sets the background according to whether it received an error or not, but the users want the three daily runs to be displayed separately, so now I need to set up a time interval for each of the three panels to display. I have tried a lot of things with no luck (I am new to Splunk, so I have just been trying different syntax at random).
I have tried using a search command |search Time>6:00:00 Time<7:00:00 and also tried other commands placed before the stats command that gets the latest time, with no luck, and I'm just stuck at this point with no clue what to try.
I have my index at the top here but don't think it's necessary to show.
| rex field=_raw ".+EVENT:\s(?<event>\S+)\s.+STATUS:\s(?<status>\S+)\s.+JOB:\s(?<job>\S+)"
| stats latest(_time) as Time by status
| eval Time=strftime(Time, "%H:%M:%S") stats Time>6:00:00 Time<7:00:00
| sort 1 - Time
| table Time status
| append [| makeresults | eval Time="06:10:00"]
| eval range = case(status="FAILURE", "severe", status="SUCCESS", "low", 1==1, "guarded")
| head 1
I was having the same issue as you (whereas the solutions on here and elsewhere were not working); however, the below ended up working for me:
(adding this to properly format / strip the hours)
| eval date_hour=strftime(_time, "%H")
and my full working search (each day, between 6am and 11pm, for the prior 25 days):
index=mymts earliest=-25d | eval date_hour=strftime(_time, "%H") | search date_hour>=6 date_hour<=23 host="172.17.172.1" "/netmap/*"
In the load test the following scenarios are included:
users viewing video, listening to audio, viewing PDF, PPT, etc.
With 100 and 200 concurrent users running these scenarios, what is the minimum bandwidth (Mbps) required?
Normally you should have some form of NFR or SLA stating something like:
a user with a mobile phone connected to a 3G network should have a response time of not more than 2 seconds
So you need to determine the slowest supported network type and simulate users connected to this network type accessing your application. You can use a Duration Assertion to automatically fail requests whose response time exceeds the acceptable threshold.
Here are the most popular throttled data presets which can be useful:
Bandwidth | cps
GPRS | 21888
3G | 2688000
4G | 19200000
WIFI 802.11a/g | 6912000
ADSL | 1024000
100 Mb LAN | 12800000
Gigabit Lan | 128000000
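These cps (characters per second) figures follow the convention used for JMeter's bandwidth-throttling properties (e.g. httpclient.socket.http.cps), namely cps = (bandwidth in kbit/s * 1024) / 8. A quick Python sketch, assuming the nominal bandwidth of each preset, reproduces the table:

# cps = (bandwidth in kbit/s * 1024) / 8
presets_kbps = {
    "GPRS": 171,
    "3G": 21_000,
    "4G": 150_000,
    "WIFI 802.11a/g": 54_000,
    "ADSL": 8_000,
    "100 Mb LAN": 100_000,
    "Gigabit Lan": 1_000_000,
}
for name, kbps in presets_kbps.items():
    print(f"{name:15s} | {kbps * 1024 // 8} cps")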
For a JMeter load test, I'd like to replay what we call a 'playbook', which has a form similar to this:
offset ms | request
--------------------------
0 | http://localhost/request1
7 | http://localhost/request2
12 | http://localhost/request3
25 | http://localhost/request4
... | ...
Where '0' is the start time of the test, and each request should be fired exactly x milliseconds after that, as given in the first column, irrespective of how long the individual requests take.
What I want to avoid is the regular way JMeter works, where each thread basically fires one request after the other.
Background: We already have a tool which creates this sort of playbook, and it is a very realistic way of simulating user behavior. We are now evaluating if we can use JMeter to execute them.
In JMeter this is similar to think time.
After each request you can add a Test Action (as you would for think time) and, under it, a Uniform Random Timer with Random Delay Maximum = 0 and, in your case:
After the first request, a Constant Delay Offset of 7000
After the second request, a Constant Delay Offset of 5000
After the third request, a Constant Delay Offset of 13000
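The three Constant Delay Offset values are just the gaps between consecutive offsets in the playbook (7, 5 and 13 units between 0, 7, 12 and 25). A tiny sketch of that conversion, assuming the offsets have been read into a list:

offsets = [0, 7, 12, 25]                                   # the playbook's first column
delays = [nxt - cur for cur, nxt in zip(offsets, offsets[1:])]
print(delays)                                              # [7, 5, 13]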
I want to use Cucumber to test my application which takes snapshots of external websites and logs changes.
I have already tested my models separately using RSpec and now want to write integration tests with Cucumber.
For mocking the website requests I use VCR.
My tests usually follow a similar pattern:
1. Given I have a certain website content (I do this using VCR cassettes)
2. When I take a snapshot of the website
3. Then there should be 1 "new"-snapshot and 1 "new"-log message
Depending on whether the content of the website has changed, a "new"-snapshot and a "new"-log message should be created.
If the content stays the same, only an "old"-log message should be created.
This means that the application's behaviour depends on the currently existing snapshots.
This is why I would like to run the different scenarios without resetting the DB after each row.
Scenario Outline: new, new, same, same, new
Given website with state <state>
When I take a snapshot
Then there should be <snapshot_new> "new"-snapshots and <logmessages_old> "old"-log messages and <logmessages_new> "new"-log messages
Examples:
| state | snapshot_new | logmessages_old | logmessages_new |
| VCR_1 | 1 | 0 | 1 |
| VCR_2 | 2 | 0 | 2 |
| VCR_3 | 2 | 1 | 2 |
| VCR_4 | 2 | 2 | 2 |
| VCR_5 | 3 | 2 | 3 |
However, the DB is reset after each scenario is run.
And I think that Scenario Outline was never intended to be used like this. Scenarios should be independent of each other, right?
Am I doing something wrong trying to solve my problem in this way?
Can/should scenario outline be used for that or is there another elegant way to do this?
J.
Each line in the Scenario Outline Examples table should be considered one individual Scenario. Scenarios should be independent from each other.
If you need a scenario to depend on the system being in a certain state, you'll need to set that state in the Given.
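For example (hypothetical step wording; it assumes step definitions that seed the database and stub the website via the named VCR cassette), a scenario that depends on an earlier snapshot can declare that starting state explicitly:

Scenario: unchanged content only adds an "old" log message
  Given a snapshot of the website already exists
  And the website content is unchanged (cassette "VCR_2")
  When I take a snapshot
  Then no new snapshot should be created
  And there should be 1 "old"-log message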
I want to monitor the IPMI System Event Log (SEL) in real time: whenever an event is generated in the SEL, a mail alert should automatically be sent.
One way for me to achieve this is to write a script and schedule it in cron. The script would run 3 or 4 times a day, so whenever a new event has been generated, a mail alert gets sent to me.
However, I want the monitoring to be active: whenever an event is generated, a mail should be sent to me immediately, instead of checking at regular intervals.
The SEL Log format is as follows:
server-001% sudo ipmitool sel list
b4 | 05/27/2009 | 13:38:32 | Fan #0x37 | Upper Critical going high
c8 | 05/27/2009 | 13:38:35 | Fan #0x37 | Upper Critical going high
dc | 08/15/2009 | 07:07:50 | Fan #0x37 | Upper Critical going high
So, for the above case, whenever a new event is generated, a mail alert with the event should automatically be sent to me.
How can I achieve this with a bash script? Any pointers will be highly appreciated.
I believe some vendors have special extensions in their firmware for exactly what you are describing (i.e. you just configure an e-mail address in the service processor), but I can't speak to each vendor's support. You'll have to check your motherboard's documentation for that.
In terms of a standard mechanism, you are probably looking for IPMI PET (Platform Event Trap) support. With PET, when certain SEL events are generated, an SNMP trap is sent. The SNMP trap, once received by an SNMP daemon, can do whatever you want, such as send an e-mail out.
A user of FreeIPMI wrote up his experiences in a doc and posted his scripts, which you can find here:
http://www.gnu.org/software/freeipmi/download.html
(Disclaimer: I maintain FreeIPMI so I know FreeIPMI better, unsure of support in other IPMI software.)
As an FYI, several IPMI SEL logging daemons (FreeIPMI's ipmiseld and ipmitool's ipmievtd are two I know) poll the SEL based on a configurable number of seconds and log the SEL information to syslog. A mail alert could also be configured in syslog to send out an e-mail when an event occurs. These daemons are still polling based instead of real-time, but the daemons will probably handle many IPMI corner cases that your cron script may not be aware of.
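If you do want to roll your own in the meantime, here is a minimal polling sketch (written in Python for brevity, though the same logic works in a bash script; it assumes ipmitool is on the PATH and a local SMTP server accepts mail, and it does not handle the IPMI corner cases the daemons above do):

import smtplib
import subprocess
import time
from email.message import EmailMessage

RECIPIENT = "admin@example.com"          # hypothetical address

def sel_entries():
    # Each line looks like: "b4 | 05/27/2009 | 13:38:32 | Fan #0x37 | ..."
    out = subprocess.run(["ipmitool", "sel", "list"],
                         capture_output=True, text=True, check=True).stdout
    return [line.strip() for line in out.splitlines() if line.strip()]

def record_id(entry):
    return entry.split("|")[0].strip()   # the first field is the SEL record id

def send_mail(new_entries):
    msg = EmailMessage()
    msg["Subject"] = "New IPMI SEL events"
    msg["From"] = "sel-monitor@localhost"
    msg["To"] = RECIPIENT
    msg.set_content("\n".join(new_entries))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

seen = {record_id(e) for e in sel_entries()}   # don't re-mail pre-existing events
while True:
    new = [e for e in sel_entries() if record_id(e) not in seen]
    if new:
        send_mail(new)
        seen.update(record_id(e) for e in new)
    time.sleep(60)                             # still polling, just frequently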
Monitoring of IPMI SEL events can be achieved using the ipmievd tool. It is part of the ipmitool package.
# rpm -qf /usr/sbin/ipmievd
ipmitool-1.8.11-12.el6.x86_64
To send SEL events to syslog, execute the following command:
ipmievd sel daemon
Now, to simulate the generation of SEL events, execute the following command:
ipmitool event 2
This will generate the following event:
Voltage Threshold - Lower Critical - Going Low
To get the list of SEL events that can be generated, try:
# ipmitool event
usage: event <num>
Send generic test events
1 : Temperature - Upper Critical - Going High
2 : Voltage Threshold - Lower Critical - Going Low
3 : Memory - Correctable ECC
The event will be written to /var/log/messages. The following message was generated in the log file:
Oct 21 15:12:32 mgthost ipmievd: Voltage sensor - Lower Critical going low
Just in case it helps anyone else...
I created a shell script to record data in this format, and I parse it with PHP and use Google's Chart API to make a nice line graph.
2016-05-25 13:33:15, 20 degrees C, 23 degrees C
2016-05-25 13:53:06, 21.50 degrees C, 24 degrees C
2016-05-25 14:34:39, 19 degrees C, 22.50 degrees C
#!/bin/sh
DATE=`date '+%Y-%m-%d %H:%M:%S'`
temp0=$(ipmitool sdr type Temperature | grep "CPU0 Diode" | cut -f5 -d"|")
temp1=$(ipmitool sdr type Temperature | grep "CPU1 Diode" | cut -f5 -d"|")
echo "$DATE,$temp0,$temp1" >> /events/temps.dat
The problem I'm having now is getting the cron job to access the data properly, even though it's set in the root crontab.