I am struggling to make sense of the load test results from JMeter.
I want to understand how much load a plain vanilla Laravel application can handle. I set up simple endpoints on plain Laravel 8 and tried them on different AWS EC2 instances (t3.medium, t3.xlarge, a load-balanced setup, etc.).
The following routes were used, where we forcefully increase the response time:
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::get('/r0', function (Request $request) {
    $data = [
        'name' => 'test_name',
        'image' => 'imageURL',
        'translations' => [
            'en' => 'test_english',
            'te' => 'పంచాయతి సెక్రటరి',
        ],
    ];

    return response()->json($data, 200, ['Content-Type' => 'application/json;charset=UTF-8', 'Charset' => 'utf-8'], JSON_UNESCAPED_UNICODE);
});
Route::get('/r1', function (Request $request) {
    sleep(2);
    $data = [
        'name' => 'test_name',
        'image' => 'imageURL',
        'translations' => [
            'en' => 'test_english',
            'te' => 'పంచాయతి సెక్రటరి',
        ],
    ];

    return response()->json($data, 200, ['Content-Type' => 'application/json;charset=UTF-8', 'Charset' => 'utf-8'], JSON_UNESCAPED_UNICODE);
});
Route::get('/r2', function (Request $request) {
    sleep(10);
    $data = [
        'name' => 'test_name',
        'image' => 'imageURL',
        'translations' => [
            'en' => 'test_english',
            'te' => 'పంచాయతి సెక్రటరి',
        ],
    ];

    return response()->json($data, 200, ['Content-Type' => 'application/json;charset=UTF-8', 'Charset' => 'utf-8'], JSON_UNESCAPED_UNICODE);
});
I recorded the throughput and deviation as obtained from the "Graph Results" listener, and other details as obtained from the "Aggregate Report" in JMeter, for the different endpoints.
Here's a sample result for r1 on one of the setups:
From the above image/table, here are some of the questions I want answered:
What is the load the server can handle for this endpoint? Is it the point where the deviation is higher than the throughput (as claimed in some blog posts), or the point where the error % becomes non-zero?
How does one define the maximum load or capacity the server can handle? Can it be an absolute number, like 200 users at any point in time?
For some endpoints, the server stops responding completely after a certain load. I had to restart the server before I could do further testing. Why does that happen?
Your table doesn't tell the full story regarding how many threads were active and what the relationship was between the number of threads, throughput, response time, errors, etc. I would rather suggest generating the HTML Reporting Dashboard; it is far more informative than the aggregated numbers.
Normally I would look for the saturation point, i.e. the point of maximum system performance:
start with 1 virtual user
gradually increase the load while watching, for example, the Transactions Per Second chart. On a well-behaved system the throughput (number of transactions per second) should grow by the same factor as the number of virtual users: double the load and the throughput should double, while response times stay more or less the same
at some point you will see that throughput decreases and response time increases. If you look at the Active Threads Over Time chart right before that moment, you will see how many virtual users were online at that stage; this is the number you're looking for
you can continue increasing the load to see when errors start occurring or the application terminates (a rough capacity sketch for these particular endpoints follows below)
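To make that concrete for these endpoints: each request to /r1 holds one PHP worker for the whole sleep(2), so the throughput ceiling is roughly the number of workers divided by the response time. A minimal back-of-the-envelope sketch (the worker count here is a hypothetical pm.max_children value, not something taken from the question):

// Rough capacity estimate, assuming each request occupies one PHP-FPM
// worker for the full response time (which sleep() guarantees here).
$workers      = 10;   // hypothetical worker count (pm.max_children)
$responseTime = 2.0;  // seconds a single /r1 request holds a worker

// Theoretical ceiling on throughput for this endpoint:
$maxThroughput = $workers / $responseTime; // = 5 requests per second

echo "max ~{$maxThroughput} req/s before requests start queueing\n";

Beyond that point, extra virtual users only queue up, which is why response times climb while throughput stays flat; with the 10-second sleep on /r2 the worker pool is exhausted even sooner.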
As for defining the max load or capacity: I think point one above provides the answer.
There are many possible reasons; the most common/obvious ones are:
the server lacks essential resources (CPU, RAM, network); this can be checked using the JMeter PerfMon Plugin
the server is not properly configured for high loads; see e.g. the 12 Tips for Laravel Performance Optimization in 2020 guide for example tuning tweaks
check your application and operating system logs; it might be the case that the application was terminated for consuming the previously mentioned resources, for example by the OOM Killer
I am developing a web player of sorts, and I'm using the PHP framework Laravel to handle the playlist data. I create an array of the playlist with all the necessary information. With this array I create a Howl instance of the playlist object when it needs to be played.
This works fluidly on Firefox and Chrome, both on desktop and on mobile. However, I'm encountering issues when testing on Safari or iOS browsers.
What happens: the audio plays normally, but around 1-2 minutes into the song it loops back to a point roughly 20-45 seconds earlier. This produces a really annoying result where the same part of the song keeps repeating until the track ends, which it eventually does, because despite the looping the app keeps ticking up the seconds of the song (sound.seek() keeps increasing).
Looking at the network tab I've noticed something odd: whereas the other browsers fetch the audio source only once, Safari does so multiple times. This is about the only tangible difference I've noticed.
Since I don't have 10 rep, the image goes here: https://imgur.com/Y48J52g
The oddest part is that a locally hosted version doesn't have these issues. So is this a web server issue? A browser issue? I'm at a loss.
The onloaderror and onplayerror events don't fire either, so no issues there as far as I know.
Instancing the howl:
sound = data.howl = new Howl({
    src: ['./get-audio' + data.file],
    html5: true,
    // After this I instance all the onX functions (onplay, onend, etc.)
    ...
});

sound.play();
Then whenever I need the next song, I unload this Howl instance and create the next one.
Most of my code is adjusted from the HowlerJS example 'player', in case you'd like to delve deeper into the code itself.
How the audio gets served:
public function getAudio($map, $name)
{
    $fileName = $map.'/'.$name;
    $file = Storage::disk('local')->get($fileName);
    $filesize = Storage::disk('local')->size($fileName);
    $size = $filesize;
    $length = $size;
    $start = 0;
    $end = $size - 1;

    return response($file)
        ->withHeaders([
            'Accept-Ranges' => "bytes",
            'Accept-Encoding' => "gzip, deflate",
            'Pragma' => 'public',
            'Expires' => '0',
            'Cache-Control' => 'must-revalidate',
            'Content-Transfer-Encoding' => 'binary',
            'Content-Disposition' => 'inline; filename='.$name,
            'Content-Length' => $filesize,
            'Content-Type' => "audio/mpeg",
            'Connection' => "Keep-Alive",
            'Content-Range' => 'bytes 0-'.$end.'/'.$size,
            'X-Pad' => 'avoid browser bug',
            'Etag' => $name,
        ]);
}
So I'm not sure why Safari/iOS has an issue with the hosted version while it works locally.
This is my first question on this site, so if you'd like some more information, let me know.
I found out the issue: Safari thought I was serving an audio stream rather than just an MP3 file, causing it to continuously send requests. I solved this by serving my audio like this:
use Symfony\Component\HttpFoundation\BinaryFileResponse;

$path = storage_path().DIRECTORY_SEPARATOR."app".DIRECTORY_SEPARATOR."songs".DIRECTORY_SEPARATOR.$name;

$response = new BinaryFileResponse($path);
BinaryFileResponse::trustXSendfileTypeHeader();

return $response;
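For context, here is a minimal sketch of what the full controller method could look like with this approach. It assumes the same storage/app/songs layout as the snippet above; the AudioController class name and the inline Content-Disposition are illustrative, not taken from the question:

use Symfony\Component\HttpFoundation\BinaryFileResponse;
use Symfony\Component\HttpFoundation\ResponseHeaderBag;

class AudioController extends Controller
{
    public function getAudio($map, $name)
    {
        // Absolute path to the file under storage/app/songs.
        $path = storage_path('app'.DIRECTORY_SEPARATOR.'songs'.DIRECTORY_SEPARATOR.$name);

        // Let the web server take over delivery via X-Sendfile / X-Accel-Redirect
        // when such a header is configured.
        BinaryFileResponse::trustXSendfileTypeHeader();

        // BinaryFileResponse understands HTTP Range requests and answers them
        // with proper 206 Partial Content responses, so Safari stops treating
        // the endpoint as an endless stream and re-requesting it.
        $response = new BinaryFileResponse($path);
        $response->setContentDisposition(ResponseHeaderBag::DISPOSITION_INLINE, $name);

        return $response;
    }
}

The key difference from the original getAudio() is that range handling is done for you, instead of a hand-written Content-Range header that always spans the whole file.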
I'm trying to make a SoftLayer API call using Ruby to see upcoming maintenance and the machines that may be affected by it. I have a few questions, but I'm running into an issue seeing many of the relational properties documented here:
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Notification_Occurrence_Event
Here is my simple program:
require 'rubygems'
require 'softlayer_api'
require 'pp'

client = SoftLayer::Client.new(:username => user, :api_key => api_key, :timeout => 99999999)

account = client['Account'].object_mask("mask[pendingEventCount]").getObject()
pending_event_count = account["pendingEventCount"]

current_time = Time.now.to_i

for i in 0..(pending_event_count / 30.0).ceil - 1
  list_of_pending_events = client['Account'].result_limit(i * 30, 30).object_mask("mask[id, startDate, endDate, recoveryTime, subject, summary]").getPendingEvents
  for x in 0..list_of_pending_events.length - 1
    start_time = DateTime.parse(list_of_pending_events[x]['startDate']).to_time.to_i
    if start_time > current_time
      pp list_of_pending_events[x]
    end
  end
end
The above works, but if I try to add a relational property such as "impactedResources" to the mask, it fails, saying that the property does not belong to SoftLayer_Notification_Occurrence_Event. Can someone help explain why this and many other relational properties are not valid in the above call?
Also, two quick other questions on this topic:
1) Why do some of the results in getPendingEvents have start AND end times in the past? And why do some have no end time at all? Notice that I'm checking whether the start time is greater than the current time, since there seems to be old maintenance data in the results.
2) Am I taking the right approach for getting upcoming maintenance and figuring out which machines will be impacted (using getPendingEvents and then the impactedResources property)?
I used your code, added "impactedResources" to the object mask, and it worked fine. You may be getting the issue because your API user does not have enough permissions; I recommend you try with the master user.
Regarding your other questions:
1.- When I ran your code I got only 3 events whose startDate is greater than the current time, so it worked fine for me. If you are still getting that issue, you can try using object filters:
require 'rubygems'
require 'softlayer_api'
require 'pp'

client = SoftLayer::Client.new(:username => user, :api_key => apikey, :timeout => 99999999)

object_filter = SoftLayer::ObjectFilter.new
object_filter.set_criteria_for_key_path('pendingEvents.startDate',
                                        'operation' => 'greaterThanDate',
                                        'options' => [{
                                          'name' => 'date',
                                          'value' => ['9/8/2016']
                                        }])

list_of_pending_events = client['Account'].object_filter(object_filter).object_mask("mask[id, startDate, endDate, recoveryTime, subject, summary]").getPendingEvents
pp list_of_pending_events
2.- Yes, that approach will work.
Regards
I'm developing a small script that does some data crunching. If I run ab -n10 -c1 (benchmark sending requests one after another), the requests take ~750 ms. If instead I use -c2 (send requests two at a time), the requests seem to take >2 s. Here's what the code looks like:
get '/url/' do
  # ...
  # Images is a MongoDB collection
  Images.find({'searchable_data.i' => {
    '$in' => color_codes
  }}).limit(5_000).each do |image|
    found_images << {
      :url => image['url'],
      :searchable_data => image['searchable_data'],
    }
  end
  # ...
end
From debugging I noticed that the requests to Mongo fire at roughly the same time and return at roughly the same time, but they take more than twice as long as they would if I ran them one at a time. Also, I've watched the CPU/memory usage of the Mongo processes, and Mongo doesn't even flinch. Here's how I connect to Mongo:
configure do
  # ...
  server_connection = Mongo::Connection.new(db.host, db.port, :pool_size => 60)
  DB = server_connection.db(db_name)
  DB.authenticate(db.user, db.password) unless (db.user.nil? || db.password.nil?)
  Images = DB[:Images]
end
Is there something I'm doing wrong? I can't imagine the MongoDB driver being that bad.
What I want to do is cache a page for 1 hour. The thing is, I want to be able to mark the cache stale during this 1-hour period if my object is modified.
Here is my code so far:
$response = new Response();
$response->setLastModified(new \DateTime($lastModified));

if ($response->isNotModified($this->getRequest())) {
    return $response;
} else {
    $response->setCache(array(
        'public' => true,
        'max_age' => 3600,
        's_maxage' => 3600,
    ));
}
The problem is that the above code doesn't check Last-Modified. Once the 1-hour cache is created, I have to wait a full 60 minutes to see the changes I have made to my object ($lastModified).
Here is an example of caching a page using the Last-Modified header in the Symfony2 documentation.
I think your mistake is that you try to use Last-Modified and then override it with the Cache-Control header (max_age, s_maxage).
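One way to get both behaviours, sketched on top of the snippet from the question (untested against your setup): keep Last-Modified as the validator and let caches revalidate instead of granting them a full hour of freshness.

use Symfony\Component\HttpFoundation\Response;

$response = new Response();
$response->setPublic();
$response->setLastModified(new \DateTime($lastModified));

// Ask caches to check back on every request instead of serving the page
// unconditionally for an hour; a 304 Not Modified is cheap to produce.
$response->setMaxAge(0);
$response->headers->addCacheControlDirective('must-revalidate', true);

if ($response->isNotModified($this->getRequest())) {
    return $response; // 304, nothing is re-rendered
}

// ...otherwise render the page into $response and return it...

The trade-off is an extra conditional request per hit, but changes show up as soon as $lastModified moves forward.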
I'm trying to store sessions in a database using Zend Sessions, but for some reason my sessions die out. I'm not sure if there's some code being executed that does this or whether it's something else.
I've noticed that the session ID seems to be regenerated a brief time after logging in.
This happens even though I've added the following line to my .htaccess file:
php_value session.auto_start 0
The end result is that I'm logged out within a minute of logging in.
Here's my code in my bootstrap file:
$config = array(
    'name'           => 'session',
    'primary'        => 'id',
    'modifiedColumn' => 'modified',
    'dataColumn'     => 'data',
    'lifetimeColumn' => 'lifetime'
);

$saveHandler = new Zend_Session_SaveHandler_DbTable($config);

Zend_Session::rememberMe($seconds = (60 * 60 * 24 * 30));
$saveHandler->setLifetime($seconds)->setOverrideLifetime(true);

Zend_Session::setSaveHandler($saveHandler);

// start your session!
Zend_Session::start();
I'm not using any other session-related functions, except perhaps Zend_Auth when logging in.
In fact, rememberMe calls the regenerateId function of the Session class; the end result is that I'm constantly logged out every few minutes now.
I think you might be having this problem because you're calling rememberMe BEFORE starting the session.
You have to start the session first, otherwise rememberMe won't do anything, since it needs a session to set the rememberMe time on.
rememberMe calls the regenerateId function, and the regeneration of the ID is what really needs the session to exist.
Place the rememberMe call after the session start and see how that works for you.
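For reference, the reordering suggested above would look roughly like this in the bootstrap file (reusing the $config array and lifetime from the question; this is a sketch, not tested against your setup):

$saveHandler = new Zend_Session_SaveHandler_DbTable($config);
$saveHandler->setLifetime(60 * 60 * 24 * 30)->setOverrideLifetime(true);
Zend_Session::setSaveHandler($saveHandler);

// Start the session first...
Zend_Session::start();

// ...and only then extend it with the "remember me" lifetime.
Zend_Session::rememberMe(60 * 60 * 24 * 30);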
If that isn't it, then I don't know what it could be, since my code looks similar to yours.
Have you tried something like this?
protected function _initSession() {
    $config = array(
        'name'           => 'session',
        'primary'        => 'id',
        'modifiedColumn' => 'modified',
        'dataColumn'     => 'data',
        'lifetimeColumn' => 'lifetime',
        'lifetime'       => 60 * 60 * 24 * 30,
    );
    Zend_Session::setSaveHandler(new F_Session_SaveHandler_DbTable($config));
}
This way the lifetime isn't set after initialising the database sessions but is included directly in the initialisation options. It works for me; I see no reason why this should fail in your case :).
I think you need to look into the following values after your bootstrap code:
session.gc_maxlifetime
session.cookie_lifetime
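A quick way to see what these are actually set to at runtime (a trivial check you can drop in anywhere after bootstrapping; nothing here is specific to Zend):

// Effective session lifetimes, in seconds
// (0 for session.cookie_lifetime means "until the browser is closed").
var_dump(
    ini_get('session.gc_maxlifetime'),
    ini_get('session.cookie_lifetime')
);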
If you configure session resources in a *.ini config file, check the resources.session.cookie_domain parameter.
I spent 3 hours before I remembered about it.