How do I save the Cypress Test Runner's "console log" (left-hand side) to a file?

I would like to save the data from the left-hand side of the Test Runner to a text file (JSON, plain text, or any kind of text).
I feel like this should be very easy and that I'm simply missing something, but I cannot find anything that explains it. I have checked this other S.O. question: Cypress pipe console.log and command log to output, which references this currently open issue -- but that appears to be focused on collecting the browser's console log.
I even tried one of the workarounds suggested in the discussion of that open issue, cypress-log-to-output, but that put a ton of output in the terminal from which I launched the test. I tried to correlate that extra output with the relatively few entries on the Test Runner's left-hand side, but could not match them up.
I'm just hoping to get a text file that looks like this (with perhaps a bit of detail for each entry):
1 visit /
(xhr) GET 200 /todos
2 wait #todos
(req) GET /todos Received todos
...
Or perhaps JSON.
My motivation comes from having to write Cypress tests for our CI that will be testing a very old AjaxSwing-based application that makes heavy use of XHR requests, and it can be a different number of XHR requests on each test run (sometimes 8, sometimes 12 just to load the first page).
The AjaxSwing app is not changing, so I have to work with it as best I can. I wanted to see a whole text file with all the information from the Test Runner's left-hand side, and perhaps even compare separate runs to see if I can spot some "header" or "body" value I could use to distinguish the right XHR request to wait for.
Any help would be appreciated.

One approach, using the log:added event:
// top of spec
const logs = []

Cypress.on('log:added', (log) => {
  const message = `${log.consoleProps.Command}: ${log.message}`
  logs.push(message)
})

it('writes to logs', () => {
  // ... some commands that log
  cy.writeFile('logs.txt', logs)
});
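If you would rather have a plain-text file with one entry per line than the JSON that cy.writeFile produces when given an array, a minimal variation is to join the messages and write them once at the end of the spec. This is only a sketch; the log.name/log.message fields, the after hook, and the file path are assumptions to adapt to your setup:
// collect one line per command-log entry for the whole spec
const logs = []

Cypress.on('log:added', (log) => {
  // log.name is the command (visit, wait, xhr, ...), log.message its arguments
  logs.push(`${log.name} ${log.message}`)
})

after(() => {
  // join first, otherwise cy.writeFile serializes the array as JSON
  cy.writeFile('cypress/logs/command-log.txt', logs.join('\n'))
})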

Dropzone.js - Multiple file upload without duplicated response

TL;DR
I managed to simplify my question after a good night's sleep. Here's the simpler question.
I want to upload N files to a server, which would process them together and return a single response (e.g. Total foobars in all files combined = XYZ).
What's the best way to send this single response back to the client?
Thanks.
Below is the old question, left behind as a lesson for me.
I'm using Dropzone.js to build D&D functionality into my app.
Please note: I know there are already a couple of questions that discuss multi-file uploads, but they are different from mine; they are about getting a single callback call instead of multiple ones.
My issue is that when I drag and drop multiple files into the dropzone, I see the single server response duplicated multiple times. Here is my config:
Dropzone.options.inner = {
  init: function() {
    this.on("dragenter", function(e) {
      $('#inner').addClass('drag-over');
      //// TODO - find out WTF this isn't working (low priority)
    });
    this.on("completemultiple", function(file, resp) {
      //// TODO
    });
  },
  url: "php/...upload...php",
  timeout: 120000, // 2m
  uploadMultiple: true,
  autoProcessQueue: false,
  clickable: false,
};
//// ... Some other stuff
//// ...
$(document).ready(function() {
  $('#inner').click(function() {
    Dropzone.forElement('.dropzone').processQueue();
  });
});
In the beginning I intercepted the "complete" event rather than "completemultiple". That resulted in its handler being invoked multiple separate times (once for each file), even though the server-side PHP was only invoked once, and each invocation returned a duplicate copy of the same server-side message.
I didn't want that, so I changed it to "completemultiple", and now I can confirm that the handler only gets called once with an array of files. However, the single server response is now buried within each file object, and each one carries a duplicate copy of the exact same response.
It doesn't matter in the end, since it is the same message after all. But the aesthetics of the thing now seem off, which suggests I'm doing something wrong: the response makes it look like there were two independent uploads, even though they were part of a single invocation of the server-side PHP. Why make the client "believe" there were two separate upload requests when the server-side script only has one opportunity to respond? (The PHP is not sending back a different message for each file. Should it? And if so, what's the best way to do that?)
How can I make it so that if I have a scenario in which it's all-or-none, I get a single response back from the php script?
This is especially important to me because my server response will contain the status and some other data. The script does more than simply receive the uploaded files (hence the longer timeout).
I thought maybe that's a sign that I should separate the uploading part from the processing part and trigger the processing once the upload is complete.
But that means the server-side upload script can't clean up after itself: it needs to persist data beyond its own life, and it needs to return a handle to that data to the client, which would then dispatch the server-side processor in a separate ajax call, passing it the handle. That subsequent call must also clean up the files left behind by the uploader once it is done processing them.
This seems like the less elegant solution. Is this something I just need to get used to, or is there a better way of accomplishing what I want?
Also, any other free tips and hints from the front-end gurus in my network will be gratefully accepted.
Thanks.
&
The following approach works, until something better can be found.
Dropzone.options.inner = {
  // . . .
  init: function() {
    this.on("completemultiple", function(file) {
      // "file" here is the array of files in the batch; every entry carries
      // the same xhr, so the shared server response is read from the first one
      var code = JSON.parse(file[0].xhr.response).code;
      var data = { "code": code };
      $.post('php/......php', data, function(res) {
        // TODO - surface the res back to the user
      });
    });
  },
};
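A closely related option, if the goal is just to get at the batch's single response without digging it out of file[0].xhr, is Dropzone's "successmultiple" event, which fires once per batch with the files and the server response together. A minimal sketch, assuming the server replies with a JSON body containing a code field and sets an application/json content type (in which case Dropzone hands the response over already parsed):
Dropzone.options.inner = {
  uploadMultiple: true,
  init: function() {
    this.on("successmultiple", function(files, response) {
      // one callback for the whole batch; "response" is the single server reply
      console.log(response.code);
    });
  }
};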

Non-helpful error message in Calabash with the page objects pattern

I'm currently using the Calabash framework to automate functional testing for a native Android and iOS application. While studying it, I stumbled upon this example project from Xamarin that uses the page objects design pattern, which I find to be a much better way to organize the code, in a Selenium fashion.
I have made a few adjustments to the original project, adding a file called page_utils.rb in the support directory of the Calabash project structure. This file has this method:
def change_page(next_page)
  sleep 2
  puts "current page is #{current_page_name} changing to #{next_page}"
  @current_page = page(next_page).await(PAGE_TRANSITION_PARAMETERS)
  sleep 1
  capture_screenshot
  @current_page.assert_info_present
end
So in my custom step implementations, when I want to change the page, I trigger the event that changes the page in the UI and then update the reference for Calabash by calling this method, for example:
@current_page.click_to_home_page
change_page(HomePage)
PAGE_TRANSITION_PARAMETERS is a hash with parameters such as timeout:
PAGE_TRANSITION_PARAMETERS = {
  timeout: 10,
  screenshot_on_error: true
}
It just so happens that whenever I hit a timeout waiting for any element on any screen during a test run, I get a generic error message such as:
Timeout waiting for elements: * id:'btn_ok' (Calabash::Android::WaitHelpers::WaitError)
./features/support/utils/page_utils.rb:14:in `change_page'
./features/step_definitions/login_steps.rb:49:in `/^I enter my valid credentials$/'
features/04_support_and_settings.feature:9:in `And I enter my valid credentials'
btn_ok is the id defined for the trait of the first screen in my application. I don't understand why this keeps popping up, even in steps well past that screen, masking the real problem.
Can anyone help me get rid of this annoyance? It makes debugging test failures really hard, especially on Test Cloud.
Welcome to Calabash!
As you might be aware, you'll get a Timeout waiting for elements: exception when you attempt to query/wait for an element which can't be found on the screen. When you call page.await(opts), it is actually calling wait_for_elements_exist([trait], opts), which means in your case that after 10 seconds of waiting, the view with id btn_ok can't be found on the screen.
What is assert_info_present? Does it call wait_for_element_exists or something similar? More importantly, what method is actually being called at page_utils.rb:14?
And does your app actually return to the home screen when you invoke click_to_home_page?
Unfortunately it's difficult to diagnose the issue without some more info, but I'll throw out a few suggestions:
My first guess, without seeing your application or your step definitions, is that @current_page.click_to_home_page is taking longer than 10 seconds to actually bring the home page back. If that's the case, simply try increasing the timeout (or remove it altogether, since the default is 30 seconds; see the source).
My second guess is that the element with id btn_ok is not actually visible on screen when your app returns to the home screen. If that's the case, you could try changing the trait definition from * id:'btn_ok' to all * id:'btn_ok' (the all operator will include views that aren't actually visible on screen). Again, I have no idea what your app looks like so it's hard to say.
My third guess is it's something related to assert_info_present, but it's hard to say without seeing the step defs.
On an unrelated note, I apologize if our sample code is a bit outdated, but at the time of writing we generally don't encourage using @current_page to keep track of a page. Calabash was written in a more or less stateless manner, and we generally encourage step definitions to avoid state wherever possible.
Hope this helps! Best of luck.

Programmatically change database for heroku dataclips

We just upgraded our Heroku postgres database using the follower changeover method. We have over 50 dataclips attached to the old database, and now we need to move them over to the new database. However, doing them one by one will take a lot of time.
Is there a programmatic way to update the database a dataclip is attached to, perhaps with the CLI tools?
At least once the old database has been deprovisioned, you can now (as of March 2016) reattach them to another database:
Go to https://dataclips.heroku.com/clips/recoverable. It will display your old database and a set of 'orphaned' dataclips and you can choose to transfer them to another database (in my case the promoted follower from the changeover).
Note that this only affects the dataclips that you created, it does not affect the dataclips one of your team members created and that you only had access to. So they will have to go through this process as well.
Official devcenter article: https://devcenter.heroku.com/articles/dataclips#dataclip-recovery
Thanks to Heroku CSRF measures, programmatically updating data clips is much more difficult than you might expect. You'll need to suck it up and start clicking buttons by hand, or beg their support team to do it for you, which is just as difficult.
There is no official support for programmatically moving the dataclips. That being said, you can script it out against their HTTP API.
The base URL is https://dataclips.heroku.com/api/v1/. There are three relevant endpoints:
clips /clips
resources (databases) /heroku_resources
move clip /clips/:slug/move
Find the slug of the clip you want to move, find the resource id of the new database, and make a post to the move clip endpoint:
POST /api/v1/clips/fjhwieufysdufnjqqueyuiewsr/move
Content-Type: application/json
{"heroku_resource_id":"resource123456789#heroku.com"}
I had over 300 dataclips to move. I used the following technique to update them all (essentially reverse engineering the dataclips API).
Open Chrome with Web Developer tools, Network tab.
Log into Heroku Dataclips
Observe the network call which returns all the dataclips, in JSON (https://dataclips.heroku.com/api/v1/clips). Take this response and extract out all dataclip slugs.
Update the database for one dataclip. Observe the network call which does this (https://dataclips.heroku.com/api/v1/clips/:slug/move). Right click, Copy as cURL. This is the easiest way to get all the correct parameters, since the API uses cookies for authentication.
Write a script that loops through each dataclip slug, and shells out to curl. In Ruby, this looks like:
slugs = <paste ids here>.split("\n")
slugs.each do |slug|
  command = %Q(curl -v 'https://dataclips.heroku.com/api/v1/clips/#{slug}/move' -H 'Cookie: ...' --data '{"heroku_resource_id":"resource1234567@heroku.com"}')
  puts command
  system(command)
end
You can contact Heroku support, and they will bulk transfer the dataclips to your new database for you.
Batch working on dataclips
I've finally found a solution for working on my dataclips as a batch, using the JavaScript console and some scraping techniques. I needed it to retrieve every dataclip, but I guess it can be adapted to update them as well:
// Go to the dataclip listing (https://data.heroku.com/dataclips).
// Then execute this script in your console.
// Be careful: this will focus a new window every 4 seconds, so you
// won't be able to work for (4 seconds * number of dataclips).

// Retrieve urls and titles
let dataclips = Array
  .from(document.querySelectorAll('.rt-td:first-child a'))
  .map(el => ({ url: el.href, title: el.innerText }))

/**
 * Allows waiting for a given timeout before execution.
 * @param {number} ms - delay in milliseconds
 */
const timeout = function(ms) {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve()
    }, ms)
  })
}

/**
 * Here are all the changes you want to apply to every single
 * dataclip.
 * @param {object} window
 */
const applyChanges = function(window) {
}

// With a fast connection, 4 seconds is OK. Dial it down if you
// have errors.
const expectedLoadTime = 4000 // ms

// This is the main loop, windows are opened one by one to ensure focus and a
// correct loading time.
for (const dataclip of dataclips) {
  // This opens another window from the script, having access to its DOM.
  // See https://github.com/buonomo/kazoo for a funnier example usage!
  // And don't be shy to star and share :D
  const externWindow = window.open(dataclip.url)
  // A hack to wait for loading, this could be improved for sure.
  await timeout(expectedLoadTime)
  applyChanges(externWindow)
  externWindow.close()
}
You'd still have to implement applyChanges yourself, which I concede is a bit tedious, and I don't have time to do it now (if someone does, please share!). But at least it can be applied to all of your dataclips from a single function.
For an example usage of this script, you can take a look at the gist I made to scrape every dataclip and its related errors.

Locale string comparison does not work properly in Firefox extension web worker

The localeCompare() function does not behave the same in a Firefox extension main code and in a web worker (or chrome worker).
For instance, in the main code, I have this code:
var array = ["École", "Frère", "frère", "école"];
array.sort(function(a, b) {
return a.localeCompare(b);
});
console.log('Main: ' + array);
it shows:
Main: �cole,�cole,Fr�re,fr�re
Which is the right sorting (the encoding is not my problem).
In the worker, I have this code:
var array = ["École", "Frère", "frère", "école"];
array.sort(function(a, b) {
return a.localeCompare(b);
});
self.postMessage(array);
it prints:
Frère,frère,école,�0cole
which is in the wrong order (once again, the encoding is not my problem).
The sorting in the main code is ok, but not the one in the web worker.
I tried to change the options of the localeCompare() function in the web worker, but it does not change anything.
Why is the sorting different in the web worker and how to get it right in the web worker?
(For some reason, I could not send the data to the main code, do the sorting and send it back to the web worker. I still got the wrong order (gives me école,�0cole,Frère,frère).)
Thanks for your help.
localeCompare is still broken in Firefox Web Workers.
Wladimir mentioned Bug 616841, which indeed fixed it almost everywhere... except for web workers, which were left broken because the Intl backend was (is?) not thread-safe, or had some other thread-safety issues. The corresponding "Dead end" patch was never reviewed nor checked in.
I now filed Bug 903780, with a test case based on your code, so that localeCompare hopefully will be fixed in the future.

How to wait until the embedded ajax call is completed when injecting my script?

This is about Google Chrome extension development. I'm using a content script to inject into webpages. However, some webpages have their own ajax scripts that change the content dynamically. How do I wait until such scripts are completed, since before they complete my script cannot obtain the correct content?
For example,
1- on Google search result page,
2- I want to append "text" to title of every search result item, which could be easily done by calling,
$('h3').append("text");
3- then listen to the search query change, done by
$('input[name="q"]').change( function(eventObj){
console.log("query changed");
// DOESN'T work
$('h3').append("text");
});
The last line doesn't work, probably because at the time it's executed the page is still refreshing and $('h3') is not available yet. Google uses ajax to refresh the search results when the query on the page changes.
So the question is how to capture this change and still be able to append "text" every time successfully?
EDIT:
Have tried and didn't work either:
$('h3[class="r"]').delay(1000).append("text");
and using .delay() is not really preferred.
EDIT:
.delay() is simply not designed to pause the execution of scripts; it is only meant for queued UI effects. A workaround is:
$('input[name="q"]').change(function(eventObj) {
setTimeout(function() {
$('h3[class="r"]').append(" text");
}, 1000);
});
But as I argued before, setTimeout() is connection-speed dependent, which is not preferred because I have to manually balance the waiting time against the response speed of my script.
Although this post is down-voted for god-knows-why, I'll still be waiting for an elegant answer.
Maybe with jQuery 1.7+ (or with older versions, using "live" or "delegate"):
$('form').on("change", 'input[name="q"]', function(eventObj) {
  console.log("query changed");
  $('h3').append("text");
});
If the form is another element, change the selector accordingly.
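If the delegated change handler still fires before Google has finished re-rendering the results, another option (not from the original answers) is to watch the results container for DOM changes with a MutationObserver instead of guessing a delay. This is only a sketch: the #search container id and the h3.r selector are assumptions about Google's markup.
// Re-append the text whenever the results container is re-rendered by ajax.
var container = document.querySelector('#search'); // assumed results container
if (container) {
  var observer = new MutationObserver(function() {
    // only touch headings we haven't annotated yet, so our own append
    // doesn't trigger endless re-work
    $('h3.r').not('.annotated').addClass('annotated').append(' text');
  });
  observer.observe(container, { childList: true, subtree: true });
}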
