How do I use cy.each() when the page reloads the images each time? - cypress

Desired Outcome: I would like to write a test that clicks each image's delete "X", and keeps doing so until there are no images left.
Use Case:
The user has 18 images.
The first page shows 15.
When the user deletes 1 image, I reload the list of images so that page 1 is backfilled to 15 again, leaving only 2 images on page 2.
Error:
Because the images reload, Cypress's .each() breaks with the following error message:
cy.click() failed because the page updated as a result of this
command, but you tried to continue the command chain. The subject is
no longer attached to the DOM, and Cypress cannot requery the page
after commands such as cy.click().
Cypress Code Implementation:
cy.get('[datacy="deleteImageIconX"]').each(($el) => cy.wrap($el).click());
What can I do to run a successful test that meets my use-case?

I've seen this before; this answer helped me tremendously: waiting-for-dom-to-load-cypress.
The trick is not to iterate over the elements but to loop over the total count (18) and confirm each deletion inside the loop.
for (let i = 0; i < 18; i++) {
  cy.get('[datacy="deleteImageIconX"]') // re-query after every page update instead of holding a reference
    .first()
    .click()
    .should('not.exist');               // confirm this deletion finished before the next iteration
}
This strategy eliminates the problems of:
working with stale (detached) element references
iterating too fast (the current action is confirmed complete before the next one starts)
the page count being less than the total count (the page adjusts between iterations)
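If you would rather not hardcode 18, one variant of the same technique is to read the total from the UI first. This is only a sketch: it assumes a hypothetical [datacy="imageCount"] element that displays the user's total image count, which is not part of the original question.

cy.get('[datacy="imageCount"]')           // hypothetical element showing the total, e.g. "18"
  .invoke('text')
  .then((text) => {
    const total = parseInt(text, 10);
    for (let i = 0; i < total; i++) {
      cy.get('[datacy="deleteImageIconX"]') // re-query each time; never reuse a detached element
        .first()
        .click()
        .should('not.exist');               // wait for this deletion before clicking the next X
    }
  });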

Related

How do I navigate and select entries (for POSTing) from a matrix of values that spans several pages?

Could anyone help me or give suggestions on how I can achieve this? A matrix of entries (10 rows by 12 columns) runs across several pages, page by page, with a link to the next page. I need to select the entries and make a payment for every run. It is not a good idea to create samplers page by page, so I am trying to achieve the following:
{
1. If entries found >= 20 on the first page:
a. HTTP POST
b. Go to step-4
2. If entries < 20 AND Next page (link) exists:
a. Click Next Page link (HTTP POST)
b. Go to Step-1
3. If entries < 20 AND Next page does not exist:
a. Print a message
4. Payment Page
}
The JMeter components you will need are:
If Controller - for choosing next request depending on the number of results
Module Controller - as the target for the If Controller
More information: Easily Write a GOTO Statement in JMeter
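A sketch of the If Controller conditions, assuming a Post-Processor (for example a Regular Expression or CSS Selector Extractor) has already stored the number of entries in a variable named entryCount and the next-page link in nextPage with a default value of NOT_FOUND; all three names are assumptions, not part of the original question:

If Controller "entries >= 20, go to payment":        ${__jexl3(${entryCount} >= 20)}
If Controller "fewer than 20 and a next page exists": ${__jexl3(${entryCount} < 20 && "${nextPage}" != "NOT_FOUND")}
If Controller "fewer than 20 and no next page":       ${__jexl3(${entryCount} < 20 && "${nextPage}" == "NOT_FOUND")}

Inside the second controller, a Module Controller pointing back at the block that counts the entries gives you the "Go to Step-1" jump.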

Iterate over array while adding new elements to array

I'm writing a web scraping script in Ruby that opens a used car website, searches for a make/model of car, loops over the pages of results, and then scrapes the data on each page.
The problem I'm having is that I don't necessarily know the max # of pages at the beginning, and only as I iterate closer to the last few known pages does the pagination increase and reveal more pages.
I've defined cleanpages as an array and populated it with what I know are the available pages when first opening the site. Then I use cleanpages.each do to iterate over those "pages". Each time I'm on a new page I add all known pages back into cleanpages and then run cleanpages.uniq to remove duplicates. The problem seems to be that cleanpages.each do only iterates as many times as its original length.
Can I make it so that within the each do loop, I increase the number of times it will iterate?
Rather than using Array#each, try using your array as a queue. The general idea is:
queue = initial_pages
while queue.any?
  page = queue.shift                               # take the next page off the head of the queue
  new_pages = process(page)                        # scrape it; this may reveal more pages
  queue.concat(get_unprocessed_pages(new_pages))   # enqueue only pages not yet seen
end
The idea here is that you just keep taking items from the head of your queue until it's empty. You can push new items into the end of the queue during processing and they'll be processed correctly.
You'll want to be sure to remove pages from new_pages which are already in the queue or were already processed.
You could also keep your array data structure but manually maintain a pointer to the current element. This has the advantage of keeping a full list of "seen" pages, so you can remove them from new_pages before appending whatever remains:
index = 0
queue = initial_pages
loop do
  page = queue[index]
  break if page.nil?                          # stop once the pointer runs past the end
  index += 1
  new_pages = get_new_pages(page) - queue     # drop pages we have already seen
  queue.concat(new_pages)
end

Why do I see a big difference in load time between JMeter and the user experience when browsing?

My problem is that in a JMeter test I see a load time of about 200 milliseconds for a web page element, while when browsing I mostly get 3 or 4 seconds, with a response size of about 331,000 bytes.
I should mention that I cleared the cache and cookies for each iteration and also inserted a Constant Timer between the steps.
Searching for an ID, as in the code below, is the actual case described previously.
var pkg = JavaImporter(org.openqa.selenium);
var wait_ui = JavaImporter(org.openqa.selenium.support.ui.WebDriverWait);
var wait = new wait_ui.WebDriverWait(WDS.browser, 5000);
WDS.sampleResult.sampleStart()
var searchBox = WDS.browser.findElement(pkg.By.id("140:0;p"));
searchBox.click();
searchBox.sendKeys("1053606032");
searchBox.sendKeys(org.openqa.selenium.Keys.ENTER);
WDS.sampleResult.sampleEnd()
I expected to see the same load time results. Maybe an option would be to wait until some elements on the search results page are visible, but I cannot explain why there is such a difference. I had another case where the page loads in 10 seconds in Chrome but shows 300 milliseconds in the JMeter test results.
Please try waiting until a specific element is present, one that loads at roughly the same time as the whole page.
Below is another attempt at the same thing. Use the code below and check whether it helps:
WDS.sampleResult.sampleStart()
WDS.browser.get('http://jmeter-plugins.org')
//(JavascriptExecutor)WDS.browser.executeScript("return document.readyState").toString().equals("complete")
WDS.browser.executeScript("return document.readyState").toString().equals("complete")
WDS.sampleResult.sampleEnd()
For me, without executeScript the page loads in 3 seconds and with executeScript it loads in 7 seconds, while in the browser it loads in around 7.57 seconds.
Hope this helps.
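A sketch of the "wait for a specific element" approach inside the WebDriver Sampler, assuming the search results are rendered in an element with id searchResults; that locator is hypothetical, so substitute a real one from the results page:

var pkg = JavaImporter(org.openqa.selenium, org.openqa.selenium.support.ui);
var wait = new pkg.WebDriverWait(WDS.browser, 10); // Selenium 3 style constructor: timeout in seconds

WDS.sampleResult.sampleStart();
var searchBox = WDS.browser.findElement(pkg.By.id('140:0;p'));
searchBox.click();
searchBox.sendKeys('1053606032');
searchBox.sendKeys(org.openqa.selenium.Keys.ENTER);
// keep the sampler timing until the results are actually visible,
// so the measured time is closer to what a user perceives
wait.until(pkg.ExpectedConditions.visibilityOfElementLocated(pkg.By.id('searchResults')));
WDS.sampleResult.sampleEnd();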

THREE.js DefaultLoadingManager onProgress function returns the wrong number of total items at first

I used this code to calculate the percentage of loading:
THREE.DefaultLoadingManager.onProgress = (item, loaded, total) => {
console.error(loaded / total * 100);
};
It reaches about 80%, drops back to 60%, then reaches 90% and drops back to 80%.
After some debugging, I found that the total number of items is 25 at first, then increases to about 35, and finally reaches 52.
This increase is caused by JSONLoader: I load some objects whose materials reference images, so the onProgress function adds those images to the total number of items to be loaded.
I want to know how to get the real number of items to be loaded (52) at the start. If that is not possible, how do I stop the progress from going back from 80% to 60%?
A couple of things you could do:
You can run your load once, record the final count from the results, and hardcode it for the next run.
or..
Use a format like GLTF with all the assets embedded. Then you'll get one item per model.
or..
Fire off all your loads in parallel, don't respond to the first few onProgress calls, and hopefully capture the complete item count before you start displaying progress.
or.. make a progress bar that always advances by some percentage of the remaining time, and either adjust that percentage to roughly match the load time on your machine or adjust it dynamically as you learn more about the remaining loads.
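For the "going back from 80% to 60%" part specifically, one simple variant (a sketch, not taken from the answer above) is to clamp the displayed value to the highest percentage seen so far; updateProgressBar is a hypothetical UI hook:

let maxPercent = 0;

THREE.DefaultLoadingManager.onProgress = (item, loaded, total) => {
  // "total" can still grow as nested assets are discovered, so the raw ratio can drop;
  // only ever report the highest value seen so far
  const percent = (loaded / total) * 100;
  maxPercent = Math.max(maxPercent, percent);
  updateProgressBar(maxPercent); // hypothetical function that updates the UI
};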

Handle StaleElement exception

I have a table in which data can be refreshed by selecting some filter checkboxes. One or more checkboxes can be selected and after each is selected a spinner is displayed on the page. Subsequent filters can only be selected once the previous selection has refreshed the table. The issue I am facing is that I keep getting StaleElementException intermittently.
This is what I do in capybara -
visit('/table-page') # table with default values is displayed
# select all filters one by one. Wait for spinner to disappear after each selection
filters.each {|filter| check(filter); has_no_css?('.loading-overlay', wait: 15)}
# get table data as array of arrays. Added *minimum* so it waits for table
all('tbody tr', minimum: 1).map { |row| row.all('th,td').map(&:text) }
I am struggling to understand why I am seeing StaleElementException. AFAIK Capybara uses synchronize to reload a node when calling the text method on it. It also happens that the table sometimes returns stale data (i.e. the data from before the last filter update).
The use of all or first disables reloading of any elements returned (if you use find, the element is reloadable since the query used to locate it is fully known). This means that if the page changes at all while the last line of your code is running, you'll end up with StaleElement errors. This is possible in your code because has_no_css? can run before the overlay appears. One solution is to use has_css? with a short wait time to detect the overlay before checking that it disappears. The has_xxx? methods just return true/false and don't raise errors, so in the worst case has_css? misses the appearance/disappearance of the overlay completely and basically devolves into a sleep for the specified wait time.
visit('/table-page') # table with default values is displayed
# select all filters one by one. Wait for spinner to disappear after each selection
filters.each do |filter|
  check(filter)
  has_css?('.loading-overlay', wait: 1)            # give the overlay a moment to appear
  assert_no_selector('.loading-overlay', wait: 15) # then wait for it to disappear
end
# get table data as array of arrays. Added *minimum* so it waits for table
all('tbody tr', minimum: 1).map { |row| row.all('th,td').map(&:text) }
