How to stop ABP from querying the EMPTY AbpUserOrganizationUnits table - aspnetboilerplate

We use ABP with an MSSQL database hosted on Azure.
As part of a cost-optimization effort we need to make as few requests to the DB as possible.
During my investigation I found out that ABP makes around 50 million requests per month to the AbpUserOrganizationUnits table, and we don't use this table at all.
I would like to disable all calls to this table.
Making 50 million requests to the database just to receive an empty set is not what a normal product should do.
I would like to know if there is a way to stop these requests, redirect them to a Redis cache, or even stop them inside the API.

Related

Cache and regularly update complex data

Let's start with the background. I have an API endpoint that I have to query every 15 minutes and that returns complex data. Unfortunately this endpoint does not provide information about what exactly changed. So it requires me to compare the data that I have in the DB against everything returned and then execute updates, adds, or deletes. This is pretty tedious...
I came to an idea that I could simply remove all data from certain tables and rebuild everything from scratch... But I also have to keep returning this cached data to my clients. So there might be a situation where the DB is empty during a client request because it is "refreshing/rebuilding". And that can't happen, because I have to return something.
So I came to the idea to either:
lock the certain DB tables so that the client has to wait while the DB is refreshed,
or
use CQRS: https://martinfowler.com/bliki/CQRS.html
Do you have any suggestions on how to solve the problem?
It sounds like you're using a relational database, so I'll try to outline a solution using database terms. The idea, however, is more general than that; it's similar to Blue-Green deployment.
Have two data tables (or two databases, for that matter); one is active, and one is inactive.
When the software starts the update process, it can wipe the inactive table and write new data into it. During this process, the system keeps serving data from the active table.
Once the data update is entirely done, the system can begin to serve data from the previously inactive table. In other words, the inactive table becomes the active table, and vice versa.
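A minimal sketch of that swap in JavaScript, using two in-memory stores purely for illustration; with a real database these would be two tables (or databases), and the swap a synonym or connection-string flip:

const tables = { a: [], b: [] };   // two copies of the data
let active = 'a';                  // which copy readers currently see

function rebuild(freshRows) {
  const inactive = active === 'a' ? 'b' : 'a';
  tables[inactive].length = 0;           // wipe the inactive copy
  tables[inactive].push(...freshRows);   // rebuild it from scratch
  active = inactive;                     // swap: readers now see the new data
}

function getData() {
  return tables[active];   // reads never hit a half-rebuilt store
}

// clients keep reading the old data for as long as rebuild() runs
rebuild([{ id: 1, name: 'first' }]);
console.log(getData());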

Laravel pagination in Data Table

I am using the DataTables plugin in Laravel. I have a record of 3000 entries in some table.
But when I load that page it loads all 3000 records into the browser and then creates the pagination, which slows down the page load.
How do I fix this, or what is the correct way to do it?
Use server-side processing.
Get help from some Laravel packages, such as Yajra's: https://yajrabox.com/docs/laravel-datatables/
Generally you can solve pagination either on the front end, the back end (server or database side), or a combination of both.
Server-side processing, without a package, would mean setting up TOP/FETCH queries or otherwise paging the rows being returned from your server.
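For example, DataTables' own server-side mode only ever sends one page to the browser; the ajax endpoint below is a placeholder (with Yajra's package it would be a Laravel route):

$('#products-table').DataTable({
  serverSide: true,       // paging/sorting/filtering happen on the server
  processing: true,       // show a "processing" indicator while pages load
  pageLength: 20,         // rows fetched per page
  ajax: '/products/data'  // placeholder endpoint that returns one page at a time
});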

You could also load a small amount (say 20) and then when the user scrolls to the bottom of the list, load another 20 or so. I mention the inclusion of front end processing as well because I’m not sure what your use cases are, but I imagine it’s pretty rare any given user actually needs to see 3000 rows at a time.
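A rough sketch of that incremental approach; the paged endpoint and the renderRows helper are made up for illustration:

let page = 1;
let loading = false;

async function loadMore() {
  if (loading) return;   // don't fire overlapping requests
  loading = true;
  // '/products?page=N' is a made-up endpoint returning one page of rows
  const rows = await fetch('/products?page=' + page).then(r => r.json());
  renderRows(rows);      // hypothetical helper that appends rows to the table
  page += 1;
  loading = false;
}

window.addEventListener('scroll', () => {
  const nearBottom = window.innerHeight + window.scrollY
      >= document.body.offsetHeight - 200;
  if (nearBottom) loadMore();
});

loadMore();   // first batch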

Given that Data Tables seems to have built-in functionality for paginating data, I think that #tersakyan is essentially correct — what you want is some form of back-end filtering or paginating of rows of data to limit what’s being sent to the front end.

I don’t know if that package works for you or not or what your setup looks like, but pagination can also be achieved directly from a database returning data via SQL (using TOP/FETCH, for example), or could be implemented in a Controller or Service by tracking pages of data and “loading a page at a time”, both from the server and then into the table. All you would need is a unique key to associate each "set of pages" with a specific request.
But for performance, you want to avoid both large data requests and operations on large sets of data. So the more you limit how much data is being grabbed or processed at any stage of your application using it, the more performant your application will be in principle.
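As a sketch of the database-driven variant, here is a page-at-a-time handler using SQL Server's OFFSET/FETCH. It's written Express-style in JavaScript purely for illustration (the question's stack is Laravel/PHP), and db.query is a hypothetical parameterized-query helper:

const express = require('express');
const app = express();

app.get('/products/data', async (req, res) => {
  const page = Math.max(1, parseInt(req.query.page || '1', 10));
  const pageSize = 20;
  // only one page of rows ever leaves the database
  const rows = await db.query(
    'SELECT * FROM products ORDER BY id ' +
    'OFFSET @skip ROWS FETCH NEXT @take ROWS ONLY',
    { skip: (page - 1) * pageSize, take: pageSize });
  res.json(rows);
});

app.listen(3000);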




CouchDB - Mobile application architecture - Replication performance

I built a mobile application based on CouchDB.
For security reasons, I have to make sure that a document can be read only by the users allowed to do so. As I cannot manage access rights at the document level, I create one CouchDB database per user, and I replicate documents from my main CouchDB database into each user database with a filtered replication.
This model works very well, but today I faced huge performance issues.
I tried to have all my replications continuous, filtered, and bi-directional, but after 80 users (so 81 databases and 160 simultaneous continuous replications) there were too many replications and my CouchDB service started to slow down and even crashed sometimes. Note that all the databases are on the same server (and I cannot have more than one server).
I tried to put "manual" replication in place, but even this way, when I need to replicate a document from my main database to all 80 user databases, each filtered replication from my main database to a user database takes around 30 seconds.
Maybe I have an issue with my replication filter: I store, for each document, a list of users allowed to see it. As each user has their own database, I replicate only the documents the user is allowed to see into their database. Here is my replication filter function:
function(doc, req) {
  // doc.userList holds the usernames allowed to see this document;
  // indexOf returns -1 when the username is absent, so test > -1
  if (doc.userList) {
    if (doc.userList.indexOf(req.query.username) > -1) {
      return true;
    }
  }
  return false;
}
The goal of my application is to reach around 1000 users, which is totally impossible with the current architecture/performance.
I have three questions:
1. Even if I think it's not possible: is it possible to have about 1000 databases in continuous replication on the same server?
2. Is there anything wrong with my replication filter? Is there any way to improve it to get fast database replications?
3. If the current architecture is not good at all, what kind of architecture would you advise in my case?
Thank you very much!
We finally changed our global project architecture.
The main server cannot handle more than 100 replicated databases even if the configuration limits are raised; after 80 synchronized databases the CouchDB logs start to explode. I may be wrong, but I think this kind of architecture is not possible on a single server.
Here is the solution we put in place.
We removed all the user databases, plugged all our mobile applications directly into the main database, and do a filtered replication directly against the main database (http://pouchdb.com/api.html#replication), using this solution: Example 3: filter function inside of a design document.
This new model is now working; we did some stress tests and we didn't hit any issue up to 1000 simultaneous users.
Just be aware that to replicate a database, PouchDB asks CouchDB for all the modifications applied to the main database since the last synchronization (even for filtered replication). So when you create a new PouchDB database and synchronize it, if your main CouchDB is old and has a big history (check the CouchDB _changes API), it can take a very (very) long time!
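A minimal PouchDB sketch of that setup, assuming the filter shown earlier was saved in a design document _design/app under filters.by_user; the database names and username are placeholders:

var PouchDB = require('pouchdb');

// one local device database, one shared main CouchDB database
var local = new PouchDB('local_docs');
var remote = 'https://couch.example.com/maindb';   // placeholder URL

// the filter runs on the server, so only documents whose userList
// contains the given username are pulled down to the device
local.replicate.from(remote, {
  live: true,      // keep replicating as changes arrive
  retry: true,     // back off and reconnect on network errors
  filter: 'app/by_user',
  query_params: { username: 'alice' }   // made-up username
}).on('error', function (err) {
  console.error('replication error', err);
});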
Step 0 is always to identify the bottleneck. My first guess, based on the scenario you outlined, would be to look at I/O performance. Check out
GET /_stats/couchdb
and
GET /_active_tasks
Each database gets its own read & write file descriptors, so as the number of open databases on the server increases, so do the I/O resources required. Hope this helps
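For instance, a quick way to see how many replications are running (assuming CouchDB on localhost:5984 with no authentication; adjust to your setup):

// _active_tasks returns a JSON array of running tasks; replication
// tasks carry type === 'replication'
fetch('http://localhost:5984/_active_tasks')
  .then(function (res) { return res.json(); })
  .then(function (tasks) {
    var reps = tasks.filter(function (t) { return t.type === 'replication'; });
    console.log(reps.length + ' replication task(s) currently running');
  });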

Dynamics AX Preload Behaviour

Questions
Does the user option preload refer to caching on the client or on the server?
Are there any ways to make this occur asynchronously so that users don't take a large performance hit when first requesting data from a table?
More Info
In Dynamics Ax 2012, under File > User Options > Preload a user can select which tables are preloaded the first time they're accessed.
I've not found anything to say whether this behaviour relates to caching on the client or the AOS.
The fact it's a user setting implies that it's the client.
But it could be an AOS setting where users with this option take the initial hit of preloading the entire table, whilst those without would benefit from any caching caused by other users, but wouldn't trigger the load themselves.
If it's the latter, we could improve performance by removing this option from all (human) users, leaving it enabled only on our batch user account, and having scheduled jobs on each AOS request a record from each table, thus triggering the preload without any user being negatively impacted.
Ref: http://dynamicbusinesssolutions.ru/axshared.en/html/9cd36702-2fa7-470c-a627-08
If a table is large or frequently changed, it is not a candidate for the entire-table cache. This applies to ordinary users and batch users alike.
The EntireTable cache is located on the server, but the load is initiated by the user; the first user doing the select takes the performance hit.
To successfully disable preloading for a table, you can disable it using the Admin user, and it will apply to all users. Or you can let all users disable it by themselves.
Personally I never change the user setup. If a table is large, I change the table's CacheLookup property as a customization.
See Set-based Caching:
When you set a table's CacheLookup property to EntireTable, all the records in the table are placed in the cache after the first select. This type of caching follows the rules of single record caching. This means that the SELECT statement WHERE clause must include equality tests on all fields of the unique index that is defined in the table's PrimaryIndex property.
The EntireTable cache is located on the server and is shared by all connections to the Application Object Server (AOS). If a select is made on the client tier to a table that is EntireTable cached, it first looks in its own cache and then searches the server-side EntireTable cache.
An EntireTable cache is created for each table for a given company. If you have two selects on the same table for different companies, the entire table is cached twice.
Note: Avoid using EntireTable caches for large tables, because once the cache size reaches 128 KB the cache is moved from memory to disk. A disk search is much slower than an in-memory search.

Caching expensive SQL query in memory or in the database?

Let me start by describing the scenario. I have an MVC 3 application with SQL Server 2008. In one of the pages we display a list of Products that is returned from the database and is UNIQUE per logged in user.
The SQL query (actually a VIEW) used to return the list of products is VERY expensive.
It is based on very complex business requirements which cannot be changed at this stage.
The database schema cannot be changed or redesigned as it is used by other applications.
There are 50k products and 5k users (each user may have access to anywhere from 1 to 50k products).
In order to display the Products page for the logged in user we use:
SELECT TOP X * FROM [VIEW] WHERE UserID = @UserId -- where 'X' is the size of the page
The query above returns a maximum of 50 rows (maximum page size). The WHERE clause restricts the number of rows to a maximum of 50k (products that the user has access to).
The page is taking about 5 to 7 seconds to load and that is exactly the time the SQL query above takes to run in SQL.
Problem:
The user goes to the Products page and very likely uses paging, re-sorts the results, goes to the details page, etc., and then goes back to the list. And every time, it takes 5-7s to display the results.
That is unacceptable, but at the same time the business team has accepted that the first time the Products page is loaded it can take 5-7s. Therefore, we thought about CACHING.
We now have two options to choose from; the most "obvious" one, at least to me, is using .NET caching (in memory / in proc). (Please note that distributed caching is not allowed at the moment due to technical constraints with our provider / hosting partner.)
But I'm not very comfortable with this. We could end up with lots of products in memory (when there are 50 or 100 users logged in simultaneously), which could cause other issues on the server, like .NET constantly removing cache items to free up space while our code inserts new items.
The SECOND option:
The main problem here is that it is very EXPENSIVE to generate the User x Product x Access view, so we thought we could create a flat table (or in other words a CACHE of all products x users in the database). This table would be exactly the result of the view.
However the results can change at any time if new products are added, user permissions are changed, etc. So we would need to constantly refresh the table (which could take a few seconds) and this started to get a little bit complex.
Similarly, we thought we could implement some sort of Cache Provider: upon request from a user, we would run the original SQL query and select the products from the view (5-7s, acceptable only once) and save that result in a flat table called ProductUserAccessCache in SQL. On the next request, we would get the values from this cached table (as we could easily identify that the results were cached for that particular user) with a fast query without calculations in SQL.
Any time a product was added or a permission changed, we would truncate the cached-table and upon a new request the table would be repopulated for the requested user.
It doesn't seem too complex to me, but what we are doing here basically is creating a NEW cache "provider".
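A rough sketch of that flow, in JavaScript purely for illustration since the app is .NET MVC; query and insertRows are hypothetical helpers that run parameterized SQL against the database:

async function getProducts(userId) {
  // 1. fast path: the flat cache table already holds this user's rows
  const cached = await query(
    'SELECT * FROM ProductUserAccessCache WHERE UserID = @userId', { userId });
  if (cached.length > 0) return cached;

  // 2. cache miss: run the expensive view once (the accepted 5-7s hit)
  const rows = await query(
    'SELECT * FROM [VIEW] WHERE UserID = @userId', { userId });

  // 3. persist the result so this user's next requests are cheap
  await insertRows('ProductUserAccessCache', rows);
  return rows;
}

// on any product/permission change, invalidate everything:
//   TRUNCATE TABLE ProductUserAccessCache
// the next request then repopulates the table for the requesting user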
Does any one have any experience with this kind of issue?
Would it be better to use .Net Caching (in proc)?
Any suggestions?
We were facing a similar issue some time ago, and we were thinking of using EF caching in order to avoid the delay in retrieving the information. Our problem was a 1-2 second delay. Here is some info that might help on how to cache a table by extending EF. One of the drawbacks of caching is how fresh you need the information to be, so set your cache expiration accordingly. Depending on that expiration, users might need to wait longer than they would like for fresh info, but if your users can accept that they might be seeing outdated info in order to avoid the delay, then the tradeoff would be worth it.
In our scenario, we decided it was better to have fresh info than quick info, but as I said before, our waiting period wasn't that long.
Hope it helps
