Yii2 slow Active Record

I have a Yii2-based web project. Recently I've written a REST API for it.
I've realized that every REST API call has a very long response time:
1134ms, 1250ms, 1034ms etc., so the average response time is above 1 second.
The Client model has no relations (the client table is a 'standalone' table).
My test table (client) contains 173 records (each row has 10 columns). I debugged the problem and marked the relevant line:
...
$client_id = Yii::$app->request->post('client_id');
// check client_id (e.g. whether the mobile client is blocked)
if (!empty($client_id)) {
    $client = Client::findOne($client_id); // <----- the slow line
...
So far I haven't configured any cache components, because I don't think a table with 173 records requires that.
Without the mentioned findOne() line the average response time is 30ms.
The environment:
PHP 7,
MySQL 5.5,
Yii 2
What could be the problem? Something in the configuration? I developed another project with Yii 1.1 a few years ago, and I don't remember this kind of problem there.
Thank you.
UPDATE #2:
I've noticed that every ActiveRecord-related operation takes about 1 second to finish (not just Client-related operations): getting 10 items for a GridView, updating 1 record, etc.
UPDATE #3:
OK, something strange is happening. I've created a very simple action, which also takes ~1 second to render:
public function actionTest() {
    echo "OK";
}
The login page takes an average of 32ms to load.

OK, after half a day of searching I found the problem and the solution.
After I managed to start the Yii debug toolbar (~2 hours :) ) I realized where the time was going.
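(For anyone else fighting the toolbar: here is a sketch of the standard dev-config block from the Yii2 basic template that enables the debug module; the extra allowedIPs entry is an assumption for clients not connecting from localhost.)
// config/web.php
if (YII_ENV_DEV) {
    $config['bootstrap'][] = 'debug';
    $config['modules']['debug'] = [
        'class' => 'yii\debug\Module',
        // add your IP if you are not connecting from localhost
        'allowedIPs' => ['127.0.0.1', '::1', '192.168.*'],
    ];
}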
And after spending another few hours, I found the solution.
I had to replace this config line:
'dsn' => 'mysql:host=localhost;dbname=test',
with
'dsn' => 'mysql:host=127.0.0.1;dbname=test',
Maybe MySQL is not listening on IPv6 sockets, or some other configuration makes the hostname lookup for 'localhost' slow, but with the IP address the average response time is now ~53ms.
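For reference, here is where that line lives; a minimal sketch of the Yii2 'db' component (the credentials and database name are placeholders):
<?php
// config/db.php - minimal sketch; credentials are placeholders.
return [
    'class' => 'yii\db\Connection',
    // The IP address avoids the slow 'localhost' hostname resolution
    // that was adding roughly a second to every connection here.
    'dsn' => 'mysql:host=127.0.0.1;dbname=test',
    'username' => 'dbuser',
    'password' => 'secret',
    'charset' => 'utf8',
];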

Related

Drupal 9 - custom module caching issue

Longtime D7 user, first time with D9. I am writing my first custom module and having a devil of a time. My routing calls a controller that simply does this:
\Drupal::service('page_cache_kill_switch')->trigger();
die("hello A - ". rand());
I can refresh the page over and over and get a new random number each time. But when I change the code to:
\Drupal::service('page_cache_kill_switch')->trigger();
die("hello B - ". rand());
I still get "hello A 34234234" for several minutes. Clearing the cache doesn't help; all I can do is wait, normally about two minutes. I am at my wits' end.
I thought it might be an issue with my Docker instance, so I generated a simple HTML file, but when I edit and then reload that file, the changes are reflected immediately.
In my settings.local.php I have disabled the render cache, caching for migrations, Internal Page Cache, and Dynamic Page Cache.
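For reference, disabling those caches usually looks like this (a sketch based on Drupal core's example.settings.local.php; it assumes the null cache backend is registered via development.services.yml):
// settings.local.php - route these cache bins to the null backend so
// nothing is cached during development.
$settings['cache']['bins']['render'] = 'cache.backend.null';
$settings['cache']['bins']['page'] = 'cache.backend.null';
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.null';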
In my mymod.routing.yml I have:
options:
  _admin_route: TRUE
  no_cache: TRUE
Any hint on what I am missing would be deeply appreciated.
thanks,
summer
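One side note: die() bypasses Drupal's render and response pipeline entirely. A hedged sketch of the more idiomatic version (the class and method names are hypothetical), which attaches cache metadata to a proper render array in addition to the kill switch used above:
<?php
namespace Drupal\mymod\Controller;

use Drupal\Core\Controller\ControllerBase;

class TestController extends ControllerBase {

  public function content() {
    // Keep the page cache out of the way, as in the snippets above.
    \Drupal::service('page_cache_kill_switch')->trigger();
    return [
      '#markup' => 'hello B - ' . rand(),
      // max-age 0 tells the render cache not to cache this element.
      '#cache' => ['max-age' => 0],
    ];
  }

}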

How do I use "maxPageSize" with the new Xrm.API?

Edit 2
It was a Microsoft bug. My CRM was updated recently and the query now executes as expected:
Server version: 9.1.0000.21041
Client version: 1.4.1144-2007.3
Edit
If it is a Microsoft bug, which looks likely thanks to Arun's research, then for future reference, my CRM versions are:
Server version: 9.1.0000.20151
Client version: 1.4.1077-2007.1
Original question below
I followed the example as described in the MSDN Documentation here.
Specify a positive number that indicates the number of entity records to be returned per page. If you do not specify this parameter, the value is defaulted to the maximum limit of 5000 records.
If the number of records being retrieved is more than the specified maxPageSize value or 5000 records, nextLink attribute in the returned promise object will contain a link to retrieve the next set of entities.
However, it doesn't appear to be working for me. Here's my sample JavaScript code:
Xrm.WebApi.retrieveMultipleRecords('account', '?$select=name', 20).then(
    result => console.log(result.entities.length),
    error => console.error(error.message)
);
You can see that my query doesn't include any complex filter or expand expressions, and maxPageSize is 20.
When I run this code, it returns the full set of results, not limiting the page size at all.
I noticed this too, but it happens only in UCI; the issue can't be reproduced when you run the same code in the classic web UI.
This is probably a bug on the MS side; please create a ticket so they can fix it.

Laravel Illuminate\Contracts\Http\Kernel response in index.php takes over a second to execute

I've been running some benchmark tests to find out why my application was running tremendously slowly. Our application runs on an EC2 m3 instance with a MySQL database on RDS. At first I thought it had something to do with RDS or a bad configuration, but as I started putting time checks in the code I came to the conclusion that, as optimized as my code was, the Laravel kernel itself was taking a long time to execute.
In one of my main controllers the average execution time for all the code within the controller was around 175-200ms.
However, the page took an excruciating 1.3 seconds to load! There was definitely nothing wrong with the code in the controller, so I figured something else must be causing the issue. I benchmarked the base code in the index.php file in the public directory of the Laravel application and found that creating an Illuminate\Contracts\Http\Kernel object and getting/sending the response alone took 1120ms!
<?php
require __DIR__.'/../bootstrap/autoload.php';
$app = require_once __DIR__.'/../bootstrap/app.php';

// FROM HERE ->
$kernel = $app->make('Illuminate\Contracts\Http\Kernel');
$response = $kernel->handle(
    $request = Illuminate\Http\Request::capture()
);
$response->send();
// <- TO HERE takes 1120ms, of which 200ms is my code in the controller
$kernel->terminate($request, $response);
I'm assuming this is a framework issue, but how can I overcome it? A one-second average response time is unacceptable here.
There might be a thousand reasons for this. What you should do is profile your code to verify what takes so long.
The first thing I would consider is the database connection and the queries that are executed. If you execute, for example, 10 queries and each takes 100ms, query execution alone takes 1 second, so a total of 1.3 seconds is nothing strange.
So you should check your code to see what exactly is happening: verify the execution time of a simple controller action (one that only returns a string, for example), check which providers are loaded, etc., because each such thing can affect overall performance.
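As a starting point, a crude way to see which bootstrap phase eats the time is to wrap the stages of index.php with timers. A minimal sketch (the $timings array and its labels are ad hoc, not a Laravel API):
<?php
// public/index.php, instrumented with plain microtime() checkpoints.
$timings = [];
$t = microtime(true);

require __DIR__.'/../bootstrap/autoload.php';
$timings['autoload'] = microtime(true) - $t;

$t = microtime(true);
$app = require_once __DIR__.'/../bootstrap/app.php';
$timings['bootstrap'] = microtime(true) - $t;

$t = microtime(true);
$kernel = $app->make('Illuminate\Contracts\Http\Kernel');
$response = $kernel->handle($request = Illuminate\Http\Request::capture());
$timings['handle'] = microtime(true) - $t;

$response->send();
$kernel->terminate($request, $response);

// Log the per-phase breakdown (in seconds) for each request.
error_log(json_encode($timings));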

Laravel 5: Heavy Select Query

I have about 25,000 rows in my DB table 'movies' (InnoDB, 17.5 MB).
When I try to get them all to display in my admin panel, nothing happens: just 5-8 seconds of pending and a white screen. No errors displayed, just nothing (max execution time is 3600 seconds, because it's on my local machine). My simple-as-hell code:
public function index()
{
    $data['movies'] = Movies::all();
    dd('This var_dump & die never fires');
    // return view('admin.movies', $data);
}
I just wonder why it doesn't perform the query and simply dies without a declaration of war.
I didn't find anything interesting in .env or config/database.php to explain what happens in such situations.
PS: yes, I could do server-side pagination and search and take only 10-25 records from the DB; the question is not about that.
Looks like you are running out of memory. Try querying half of the results, or maybe just 100, to see if that at least fixes the white page; if so, use chunk:
Movies::chunk(200, function ($movies) {
    foreach ($movies as $movie) {
        var_dump($movie);
    }
});
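If you are on Laravel 5.2 or later, cursor() is a related option: it runs a single query and hydrates one model at a time instead of issuing one query per chunk (a sketch using the same Movies model):
// Lazy alternative to chunk(): one query, one hydrated model at a time.
foreach (Movies::cursor() as $movie) {
    var_dump($movie);
}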
You should definitely look at your storage/logs directory to verify the error. It's quite possible that fetching 25k rows takes too much memory.
In fact, as you mentioned, in real life there is no need to fetch so many rows at once unless you are exporting them to CSV or XLS.
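To confirm the memory hypothesis, it can help to compare the request's peak usage against the configured limit; a plain-PHP sketch (the log format is illustrative):
<?php
// Run right after the heavy query: how close did we get to memory_limit?
$peak  = memory_get_peak_usage(true); // bytes actually reserved by PHP
$limit = ini_get('memory_limit');     // e.g. "128M"
error_log(sprintf('peak memory: %.1f MB (limit: %s)', $peak / 1048576, $limit));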

CodeIgniter output cache: what parts of the called controller function get executed?

One of the main purposes of caching is to save resources and avoid things like hitting your database on every request. In light of this, I'm confused about what CodeIgniter does in a controller when it encounters a cache() statement.
For example:
$this->output->cache(5);
$data = $this->main_model->get_data_from_database();
$this->load->view("main/index", $data);
I realize that the cached main/index HTML file will be shown for the next 5 minutes, but during these 5 minutes will the controller still execute the get_data_from_database() step, or will it just skip it?
Note: the CodeIgniter documentation says you can put the cache() statement anywhere in the controller function, which confuses me even more about what's getting executed.
I can answer my own question: NOTHING in the controller function other than the cached output gets executed during the time the cache is set.
To test this yourself, do a database INSERT or something else that would be logged somehow (e.g. write to a blank file).
I added the following code below my cache() statement, and it inserted into the some_table table only the first time I loaded the controller function, not the 2nd time (within the 5-minute span).
$this->db->insert('some_table', array('field_name' => 'value1') );
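Putting the experiment together, a sketch of the whole controller (CodeIgniter 3 style; the model, view, and table names follow the snippets above):
<?php
class Main extends CI_Controller {

    public function index()
    {
        $this->output->cache(5); // serve the cached output for 5 minutes

        // Side effect used as a probe: on a cache hit this never runs,
        // so no new row appears in some_table.
        $this->db->insert('some_table', array('field_name' => 'value1'));

        $data = $this->main_model->get_data_from_database();
        $this->load->view('main/index', $data);
    }
}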
I think this can be verified by enabling the Profiler in your controller and checking whether any query is run. Make sure this is enabled only for your IP if you're using it in a production environment.
$this->output->enable_profiler(TRUE);
-- EDIT 1 --
This will be visible only once: as soon as the cached page is stored, the profiler output won't be visible again (so you might want to delete the cache file and refresh the page).
-- EDIT 2 --
You might also use:
log_message('info', 'message');
inside your model, then change $config['log_threshold'] to 3 in config.php and check the log file.
-- EDIT 3 --
The SELECT will definitely run unless you have enabled the database cache; in that case, you'll see the cached database selection in the cache folder.
