I currently process a paginated list with the following command:
paginatedList.each { it.update() }
However this is slow and I'd like to leverage GPars to update each item concurrently.
I have used:
GParsPool.withPool {
    paginatedList.eachParallel { it.update() }
}
However when I run this I get:
Message: ORA-00942: table or view does not exist
The code runs serially for any number of items in the list, but it won't run concurrently even if paginatedList contains only a single item.
The update function pulls in data from several tables/views. The other tables/views work fine, but it stops on one particular view. The query it attempts works fine when executed manually (and is the same query as in the serial case, which works).
Can anyone help with why this doesn't work?
Thanks
In my Cypress test, I'm comparing the data on a HTML table (which is paginated) against expected values (which are stored in an array).
Also, the number of records in the table can vary.
The table currently shows 5 records per page, and users can navigate to the other records using the Next/Previous/First/Last buttons as usual.
Here is my latest Cypress code:
cy.task('queryDb', `${myQuery}`).then(result => {
    for (let i = 0; i < result.length; i++) {
        dashboard.name(i).should('have.text', ` ${result[i].name} `)
    }
})
The above for loop works for the 5 companies that appear on the UI, but it doesn't loop through the records that aren't visible on the screen.
Can someone please tell me how I can validate the remaining companies in the table?
Do I just do this for the first 5 records, click the 'Next' button, and then do the same for the next 5 records?
These are two very different things, and you may want to separate them into two tests:
You want to test the method that populates your HTML table and make sure you retrieve the expected results
You want to ensure that your HTML table is working as expected with the proper pagination
For (1), it would be easier to test your HTML table's query URL directly and see if you can fetch all the data without pagination. That way you can verify the retrieved data are correct.
For (2), you know the data are correct and want to make sure they are displayed as expected. It may be helpful to validate the Next and Previous buttons as well.
In this way, you will know if the problem comes from the logic inside your UI component or if it comes from your backend.
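If you do go page by page, the core logic is just slicing the expected results into page-size chunks and asserting one chunk per page before clicking 'Next'. A minimal sketch of that slicing logic, written in Python with hypothetical data (the real assertions would live in the Cypress test):

```python
def pages(expected, page_size=5):
    """Split the expected rows into the chunks each UI page should show."""
    return [expected[i:i + page_size] for i in range(0, len(expected), page_size)]

# Hypothetical expected values, as if returned by the DB query
expected = [f"Company {n}" for n in range(1, 13)]  # 12 records

chunks = pages(expected)
# 12 records with a page size of 5 -> 3 pages: 5, 5, and 2 rows
assert [len(c) for c in chunks] == [5, 5, 2]
```

In the Cypress test, each chunk would drive one round of `should('have.text', ...)` assertions, followed by a click on 'Next' for every page except the last.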
I have a Laravel 8 app and would like to update a large database table from within the application. I added two new columns to the table, and now I want to populate them using information from other columns of each record.
My table has about 5,000,000 entries that need to be updated.
My problem is that, when run from the browser, the script hits either the memory limit or the maximum execution time.
The controller function is as follows. I limited the number of retrieved rows to avoid the time and memory issues.
public function fillDB()
{
    $entries = TABLE_MODEL::query()
        ->orWhereNull("number_correct")
        ->orWhereNull("number_wrong")
        ->limit(5000)
        ->get();

    foreach ($entries as $item) {
        $item->setNumberCorrect($item->getNumberCorrectWrong(true));
        $item->setNumberWrong($item->getNumberCorrectWrong(false));
        $item->save();
    }
}
I update the columns number_correct and number_wrong on each record where either of these columns is still NULL. I cannot use a plain MySQL UPDATE, because I need to evaluate some information in each record to figure out the right values.
Is there another way of updating the table so I can run the update process at once?
Thank you very much in advance.
You can use the chunk method to iterate over large datasets, as it only fetches the specified number of records at a time. You can also use Laravel artisan commands instead of running the update from the browser.
TABLE_MODEL::query()
    ->orWhereNull("number_correct")
    ->orWhereNull("number_wrong")
    ->chunk(5000, function ($entries) {
        foreach ($entries as $item) {
            $item->setNumberCorrect($item->getNumberCorrectWrong(true));
            $item->setNumberWrong($item->getNumberCorrectWrong(false));
            $item->save();
        }
    });
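The memory behaviour that chunk relies on can be modeled outside Laravel: fetch a fixed-size batch, process it, discard it, repeat. A minimal Python sketch under that assumption (the row fetch is simulated, not a real DB call):

```python
def process_in_chunks(total_rows, chunk_size, handle):
    """Process `total_rows` records in batches of `chunk_size`.

    Only one batch is held in memory at a time; returns the batch count.
    """
    batches = 0
    for start in range(0, total_rows, chunk_size):
        rows = list(range(start, min(start + chunk_size, total_rows)))  # simulated fetch
        handle(rows)
        batches += 1
    return batches

# 12,000 rows in chunks of 5,000 -> 3 batches (5,000 + 5,000 + 2,000)
assert process_in_chunks(12_000, 5_000, lambda rows: None) == 3
```

The point is that peak memory is bounded by `chunk_size`, not by the total table size.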
I think PHP generators can be useful when you're working with very large datasets.
A generator allows us to circumvent memory limit concerns by iterating over data without first building up a large array in memory.
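The same idea can be illustrated with a Python generator, which yields one row at a time instead of materializing the whole result set (the row source below is hypothetical):

```python
def stream_rows(n):
    # Hypothetical row source: yields one record at a time,
    # so all n rows are never held in memory simultaneously.
    for i in range(n):
        yield {"id": i, "number_correct": None}

# The loop consumes rows one by one, analogous to iterating a DB cursor
count = sum(1 for _ in stream_rows(1_000_000))
assert count == 1_000_000
```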
EDIT:
Back with more details about generators on Laravel collections using cursor(). For your example it would be something like:
public function fillDB()
{
    $entries = TABLE_MODEL::query()
        ->orWhereNull("number_correct")
        ->orWhereNull("number_wrong")
        ->cursor();

    foreach ($entries as $item) {
        $item->setNumberCorrect($item->getNumberCorrectWrong(true));
        $item->setNumberWrong($item->getNumberCorrectWrong(false));
        $item->save();
    }
}
I just replaced get() with cursor() and removed the limit.
Using chunk could still lead to memory issues, at least pre v8, because query results are buffered and kept in memory. To avoid this and free memory after each chunk, you can add the following statement before you start iterating. I'm not sure this behaviour is still present in v8, but it's worth a try in your use case.
\DB::getPdo()->setAttribute(\PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
More information in this Laravel framework GitHub issue.
For a given user I want to get all the runs that:
1. satisfy some conditions, and
2. come only from projects that the user has access to.
In the Users table, every user has a list of project ids, while in the Runs table, every run has a project id. The following query works. Can it be optimized using concatMap?
r.table('users')
    .inner_join(
        r.table('runs').filter(lambda var_6: (<some_condns>)),
        lambda user, run: user['projects'].contains(run['project_id'])
    )
    .filter(lambda l: l['left']['id'] == '<user_id>')
    .without('left')
I think eq_join may not work because I am looking for an item in a list, as opposed to testing equality.
The below worked perfectly well for me:
r.table("users").get_all(user_id).map(lambda user: {
    "run_array": r.table("runs")
        .filter(lambda var_6: (<some_condns>))
        .filter(lambda run: user['projects'].contains(run["project_id"]))
        .coerce_to("array")
}).concat_map(lambda run: run['run_array'])
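For readers unfamiliar with concat_map: it maps each element to a sequence and then flattens the result one level. A plain-Python model of what the query above does (with hypothetical data, not real ReQL):

```python
def concat_map(seq, fn):
    """Map each element to a list, then flatten one level (models ReQL's concat_map)."""
    return [item for element in seq for item in fn(element)]

users = [{"id": "u1", "projects": [1, 2]}]
runs = [{"project_id": 1}, {"project_id": 2}, {"project_id": 3}]

# For each user, collect the runs whose project_id is in the user's project list,
# then flatten so the result is a single list of runs rather than a list of lists.
matched = concat_map(users, lambda u: [r for r in runs if r["project_id"] in u["projects"]])
assert matched == [{"project_id": 1}, {"project_id": 2}]
```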
I am using the Spring JdbcTemplate to extract some data from my DB with the following piece of code:
jdbcTemplate.query(SELECT_SQL_QUERY, new RowCallbackHandler() {
    public void processRow(ResultSet rs) throws SQLException {
        // Some operations here
    }
});
When I use a SELECT query, I want to know whether jdbcTemplate.query() loads everything from the database before processing the data, or loads one row after another.
I need the answer because I am using two SELECT queries and the second depends on the results of the first one (on the operations performed on the selected data). If the second call loads everything before doing any processing, it won't take into account the latest changes from the first call (because I'm using parallelism in my code).
A SELECT query executed in JDBC returns all the rows as a single result set.
You have to execute the first query and then execute the second one.
This is the source of (n+1) death by latency.
A better solution might be to do it in one query: You'll only have one network round trip that way.
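The round-trip cost can be made concrete with a small model: counting one "network trip" per query shows why the dependent-query pattern scales as n+1 while a join stays at 1. A hypothetical sketch (no real database or JDBC involved):

```python
class FakeDb:
    """Stand-in for a DB connection that counts network round trips."""
    def __init__(self):
        self.round_trips = 0

    def query(self, sql):
        self.round_trips += 1
        return [{"id": i} for i in range(3)]  # hypothetical rows

db = FakeDb()

# n+1 pattern: one query for the parents, then one more per parent row
parents = db.query("SELECT * FROM parent")
for p in parents:
    db.query(f"SELECT * FROM child WHERE parent_id = {p['id']}")
assert db.round_trips == 1 + len(parents)  # 4 trips for 3 parents

# Single-join alternative: one round trip total
db2 = FakeDb()
db2.query("SELECT p.*, c.* FROM parent p JOIN child c ON c.parent_id = p.id")
assert db2.round_trips == 1
```

With per-trip latency dominating small queries, collapsing the loop into one join is usually the bigger win than parallelizing the individual calls.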
I'm trying to build a report in AX 2009 (SP1, currently rollup 6) with a primary data source of the SalesQuotationLine table. Due to how our inventory is structured, I need to apply a filter that shows only certain categories of items (in this case, non-service items as defined in the InventTable). However, it seems that there is a problem in the link between the SalesQuotationLine and InventTable such that only two specific items will ever display.
We have tested this against the Sales Quotation Details screen as well, with the same results. Executing a query such as this:
...will only show quotes that contain one of the specific items mentioned earlier. If we change the Item Type to something else (for example, to Item), the result is an empty set. We are also seeing this issue on one of our secondary test servers, which for all intents and purposes is a fresh install.
There doesn't seem to be any issue with the data mapping from one table to the other, and we are not experiencing this with any other pair of tables. Is this a real issue, or am I just missing something?
After analyzing the results of a SQL Profiler trace captured during the execution of the query, it seems the issue was a system bug. When selecting a table to join to the SalesQuotationLines, you have two options: 'Items' and 'Items (Item Number)'. Regardless of which table you select, the query joins the InventTable with the relation "SalesQuotationLines.ProjTransCode = InventTable.ItemId".
After comparing the table to other layers in the system, I found the following block of code removed from the createLine method (in the SYP layer):
if (this.ProjTransType == QuotationProjTransType::Item)
{
    this.ProjTransCode = this.ItemId;
}
Since the ProjTransCode is no longer being populated, the join does not work except on certain quote lines that do have the ProjTransCode populated.
In addition, there is no directly defined relation to the InventTable; the link is only maintained via an Extended Data Type used on the SalesQuotationLine.ItemId field. Adding this relation manually solved the problem.