I have a case where the exact same Eloquent builder query is run twice. This is on purpose, to try to figure this out. When the query executes inside route middleware, it takes over 5000 ms to complete. However, in the controller where I added the same query, it takes only 0.06 ms. So my question is: why would the exact same query take such a long time in the middleware and not in the controller?
I don't see why accessing the database in middleware would be bad practice. Take a permission system, for example: your middleware has to verify that the logged-in user is allowed to view the current page, and there's no way to do that without querying the database (unless you get the permissions from somewhere else).
Since this query probably runs on many requests, you should make sure you optimize it properly and reduce the query time to a minimum.
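To confirm where the time actually goes, you can log every query and its execution time. A minimal sketch, assuming Laravel 4's DB::listen callback signature (later Laravel versions pass a single QueryExecuted object instead, so adjust accordingly):

```php
// e.g. in app/start/global.php or a service provider's boot() method
DB::listen(function ($sql, $bindings, $time) {
    // $time is in milliseconds; flag anything suspiciously slow
    if ($time > 100) {
        Log::warning("Slow query ({$time} ms): {$sql}", $bindings);
    }
});
```

Comparing the logged timings for the middleware request and the controller request should show whether the query itself is slow, or whether the first measurement is absorbing other per-request costs (such as establishing the database connection on the first query of the request).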
Related
I am looking for a way to put a constraint in place preventing access to data based on a site_uuid.
All the tables in my database would have a site_uuid field. I would then ideally like to pass that site_uuid as a request header and essentially apply a where clause to every query, checking that the site_uuid is valid for the dataset returned.
The idea is that it's a shared database with multiple "sites". I understand the scalability issues etc.; that is not a problem in my case, and this multi-tenant approach is the best one for me if I can make it work.
I first hit this issue using the built-in _by_pk queries, when returning a single object. I understand I can just use a where clause when returning a list of records.
I need to generate a report from the database with thousands of records. This report is generated on a monthly basis, and at times the user might want a report spanning something like 3 months. Already, as per the current records, a single month's data set can reach around 5,000 rows.
I am currently using vue-excel, which makes an API call to a Laravel API; the API returns the resource, which is then exported by vue-excel. The resource does not only return the model data — there are related data sets I also need to fetch.
This works fine for smaller data sets, say around 3,000 records, but for anything larger the server times out.
I have also tried Laravel Excel with the query concern. I actually timed them, and both take the same amount of time, because Laravel Excel was also mapping the rows to fetch the relations.
So basically, my question is: is there a better way to do this, so I can get this data faster and avoid the timeouts?
Just put this at the start of the function:
ini_set('max_execution_time', 84000); // 84000 is in seconds
This will override Laravel's built-in maximum script runtime.
I have created a news website in MVC.
It has search functionality.
When the Index action of the Search controller is called, it fetches records from the database and returns the Search view.
This Search view has an AJAX pager; when the Next or Previous button of the pager is clicked, an AJAX request is made to the Paging action of the Search controller.
Now I don't want to make another call to my database. I want to reuse the results that were fetched during the Index action of the Search controller.
For now I have used the Session[""] object.
I want to know what is better to use for state management in this scenario.
The results fetched from the database can number around 1000-5000 rows of ArticleName and ArticleShortDescription (~200 characters).
ViewBag and ViewData only persist for the current request. As such, they are not usable here.
TempData persists until the next request, but that could be any request, so there's no guarantee it lives long enough for your AJAX call (or subsequent AJAX calls).
Realistically, Session is your only decent option in this case, though it's still not optimal.
You'll be storing a lot of information that may not even be requested by the client, and cleaning it up once it's no longer needed might prove hard as well.
Your best bet would be to make calls to the database which take paging into account, so you only ever return a subset of the data each request, rather than just pulling out all the data.
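A sketch of that paged approach, assuming an Entity Framework-style IQueryable; the `db.Articles` set, the action name, and the page-size default are illustrative, not from the original question:

```csharp
public ActionResult Paging(string term, int page = 1, int pageSize = 20)
{
    // Only ever pull one page of results from the database per request.
    var results = db.Articles
        .Where(a => a.ArticleName.Contains(term))
        .OrderBy(a => a.ArticleName)   // a stable order is required before Skip/Take
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .ToList();

    return PartialView("_SearchResults", results);
}
```

Each AJAX pager click then maps directly to one small query rather than one large in-memory result set.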
You should not use any of those. Sessions are created per user: if you are storing 1000-5000 articles for each user using your search, you are going to have a bad time. TempData is fundamentally a Session object with a nice wrapper, so it's just as bad for your use case.
Let's say you decide to use HttpRuntime.Cache instead, so that you are not putting all the result on a per-user basis, then you have to worry about how long to store the objects in cache.
The logical approach would be to query the database with pagination.
To avoid hitting your database so frequently, cache each paged result using the search term + page number + page size (optional) as your cache key, with the result objects as the cache value, ideally with a cache expiration set. (You wouldn't want to serve stale search results until the cache gets evicted, right?)
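A sketch of that cache-key scheme with HttpRuntime.Cache; the `SearchDatabase` helper, the `Article` type, and the 10-minute expiration are assumptions for illustration:

```csharp
string cacheKey = string.Format("search:{0}:{1}:{2}", term, page, pageSize);

var results = HttpRuntime.Cache[cacheKey] as List<Article>;
if (results == null)
{
    // Cache miss: query the database for this page only, then cache it
    // with an absolute expiration so stale results age out on their own.
    results = SearchDatabase(term, page, pageSize);
    HttpRuntime.Cache.Insert(
        cacheKey,
        results,
        null,                                // no cache dependency
        DateTime.UtcNow.AddMinutes(10),      // absolute expiration
        System.Web.Caching.Cache.NoSlidingExpiration);
}
```

Because the key includes the search term and page, different users running the same search share one cached page instead of each holding a private copy in Session.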
I avoid using session state because it affects how your application scales in a load-balanced environment. You have to ensure a user's requests are always served by the same server, because that is where their session state lives (unless you put it in the database, but that defeats the point in your situation).
I would try application caching instead. It does mean that if the user clicks Next or Previous and that request is served by another server, you'll have to go to the database again — but personally I would rather take that hit.
Have a look at this page, in particular scroll down to the Application Caching section.
Hope this helps.
Laravel 4 has a query cache built into its query builder: just add ->remember(), according to the docs.
Can anybody tell me how I can apply this method to all queries in my application, without appending ->remember() to each and every database call in it? Some kind of after filter, I suppose.
You might be able to extend the query builder and simply overload the get() method to first call remember(), and then do the get() statement.
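A sketch of that idea for Laravel 4. The 10-minute duration is an arbitrary choice, and you would still need to wire your connection to return this builder (e.g. by overriding `Connection::query()` in a custom connection class), so treat this as an outline rather than a drop-in solution:

```php
use Illuminate\Database\Query\Builder as QueryBuilder;

class CachingQueryBuilder extends QueryBuilder
{
    public function get($columns = array('*'))
    {
        // remember() only records the cache duration; the parent get()
        // then serves the result from the cache when a duration is set.
        $this->remember(10);

        return parent::get($columns);
    }
}
```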
Practically, though, if you want to cache every single query, you might as well just do this at the database level. MySQL, for example, has a configuration option to automatically cache all queries for a certain amount of time. However, in an application that does a lot of inserts/updates/deletes, this will have poor performance since the cache is cleared for that table on every such call.
Caching every query in Laravel would also mean serving outdated data if inserts/updates/deletes happen in the meantime, so you'd have to clear the cache every time you write.
Best practice would be to diligently decide if a query should be cached or not.
I have a user object represented in JPA which has specific sub-types. Eg, think of User and then a subclass Admin, and another subclass Power User.
Let's say I have 100k users. I have successfully implemented the second level cache using Ehcache in order to increase performance and have validated that it's working.
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html#performance-cache
I know it does work (ie, you load the object from the cache rather than invoke an sql query) when you call the load method. I've verified this via logging at the hibernate level and also verifying that it's quicker.
However, I actually want to select a subset of all the users...for example, let's say I want to do a count of how many Power Users there are.
Furthermore, my users have an associated ZipCode object, and the ZipCode objects are also second-level cached. What I'd like to do is ask queries like: how many Power Users do I have in New York state?
However, my question is: how do I write a query for this that will hit the second-level cache and not the database? Note that my second-level cache is configured as read/write, so as new users are added to the system they should automatically be added to the cache. Also note that I have briefly investigated the query cache, but I'm not sure it's applicable, as that is for queries that are run multiple times. My problem is more a case of: the data should be in the second-level cache anyway, so what do I have to do so that the database doesn't get hit when I run my query?
cheers,
Brian
(...) the data should be in the second level cache anyway so what do I have to do so that the database doesn't get hit when I write my query.
If the entities returned by your query are cached, have a look at Query#iterate(). This will trigger a first query to retrieve a list of IDs and then subsequent queries for each ID... that would hit the L2 cache.
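A sketch with Hibernate 3's Session API, assuming a PowerUser entity with a mapped ZipCode association; the entity and property names here are illustrative, not taken from the original question:

```java
// iterate() first selects only the identifiers, then resolves each
// entity by ID — and those per-ID lookups go through the L2 cache.
Iterator<?> it = session
    .createQuery("from PowerUser u where u.zipCode.state = :state")
    .setParameter("state", "NY")
    .iterate();

int count = 0;
while (it.hasNext()) {
    PowerUser user = (PowerUser) it.next();
    count++;
}
```

Note that for a pure count this only pays off when the entities are actually cached; if they are not, iterate() degenerates into an N+1 query pattern against the database.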