With CasperJS, I would like to add some test coverage for HTTPS pages that load insecure resources over HTTP and produce the following Chrome console error:
"The page at 'https://www.mysite.com/' was loaded over HTTPS, but displayed insecure content from 'http://cdn.mysite.com/images/chucknorris.gif': this content should also be loaded over HTTPS."
Using CasperJS, how would you write a test to crawl an array of paths on a specific domain, identify which of those have insecure resource errors and log the insecure resource console error to a file?
Edit: as noted by Chris, fs = require('fs'); should be used to allow writing the log file.
You need to look at var fs = require('fs');. By default, JavaScript cannot write to your file system, since that would be a major security risk; enabling this module from PhantomJS allows you to write cookie files, log files, etc. Aside from using it to write log files, you could also read the output from CasperJS with something like Python.
As for how to crawl the domain, you would need to evaluate it to figure out which URLs you need to visit, and/or which URLs you need to click to bounce from page to page. That is a domain-specific question only you will be able to answer after your analysis.
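Here is a minimal sketch of such a test, untested and with the domain, paths, and log file name as assumptions. It listens for mixed-content console messages via CasperJS's remote.message event, and also flags any http:// resource request directly, since PhantomJS may not reproduce Chrome's exact warning text:

var casper = require('casper').create();
var fs = require('fs');

var domain = 'https://www.mysite.com';    // assumption: your HTTPS domain
var paths = ['/', '/about', '/products']; // assumption: the paths to crawl
var logFile = 'insecure-resources.log';
var currentPath = '';

// Log any console message that mentions insecure content.
casper.on('remote.message', function (msg) {
    if (msg.indexOf('insecure content') !== -1) {
        fs.write(logFile, currentPath + ': ' + msg + '\n', 'a');
    }
});

// PhantomJS may not emit Chrome's exact warning, so also flag any
// plain-http resource requested while we are on an https page.
casper.on('resource.requested', function (requestData) {
    if (requestData.url.indexOf('http://') === 0) {
        fs.write(logFile, currentPath + ': insecure resource ' + requestData.url + '\n', 'a');
    }
});

casper.start();

casper.eachThen(paths, function (response) {
    currentPath = response.data;
    this.thenOpen(domain + currentPath);
});

casper.run(function () {
    this.echo('Done, see ' + logFile).exit();
});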
I have a webapp in production that interacts with Google Drive through Google Drive API.
I need to change some settings in Drive interaction but I can't save.
When I save the Drive UI integration page, I receive this error:
There's a problem at our end.
Please try again. If the problem persists, please let us know using
the "Send feedback" link below. Thanks!
(Inspecting the Network console: there is an Internal Server Error on a POST call.)
I have tried to send feedback for months: nobody answers, and the bug is still there.
I also tried creating another project: I can save the first time, but then the bug returns.
What can I do? Does anyone have the same problem?
Is there a way to get a reply from Google? Is there some workaround?
Thank you.
I think the problem must be the Client ID.
Before adding the Client ID, go to Credentials -> OAuth 2.0 Client IDs,
then edit your Client ID and add your production site URL to the Authorized JavaScript origins and Authorized redirect URIs.
Then enter your Client ID on the Drive UI integration page.
While trying to get the Drive UI configured myself, I noticed a couple of errors (which don't come with any specific error messages):
When adding an Open URL, it has to be a valid domain. For instance, I tried to test it with localhost, to no avail; something like https://devbox.app.com worked, but https://localhost:8888 did not. Even though https://localhost is a valid JavaScript origin in the client_id configuration (at least for the app I am working on; I'm not sure about other apps), localhost doesn't work as an Open URL.
When adding MIME types, they need to be in the format */* and can include custom MIME types like application/custom+xml and application/custom-name+json. I'm not sure about custom types that aren't in a particular format like XML or JSON, nor about wildcards.
When adding file extensions, do not include the '.', just the name of the extension.
The app icon only failed to upload when the image wasn't the exact required dimensions; I ended up editing some icons in Photoshop to change the pixel dimensions as a quick workaround during development.
That got it to save for me, and I tested it with a file that had a custom MIME type (application/custom-name+xml specifically) and a custom file extension!
Is there a universal way to detect when a Selenium browser opens an error page? For example, disable your internet connection and do
driver.get("http://google.com")
In Firefox, Selenium will load the 'Try Again' error page containing text like "Firefox can't establish a connection to the server at www.google.com." Selenium will NOT throw any errors.
Is there a browser-independent way to detect these cases? For Firefox (Python), I can do
if "errorPageContainer" in [ elem.get_attribute("id") for elem in driver.find_elements_by_css_selector("body > div") ]
But (1) this seems like computational overkill (see next point below) and (2) I must create custom code for every browser.
If you disable your internet connection and use HtmlUnit as the browser, you will get a page with the following HTML:
<html>
<head></head>
<body>Unknown host</body>
</html>
How can I detect this without doing
if driver.find_element_by_css_selector("body").text == "Unknown host"
It seems like this would be very expensive to check on every single page load since there would usually be a ton of text in the body.
Bonus points if you also know of a way to detect the type of load problem, for example no internet connection, unreachable host, etc.
The WebDriver API does not expose HTTP status codes, so if you want to detect or manage HTTP errors, you should use a debugging proxy.
See Jim's excellent post Implementing WebDriver HTTP Status for how to do exactly that.
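For illustration, a minimal sketch of the proxy approach using the browsermob-proxy Python bindings with older Selenium bindings; the binary path is a placeholder, and the exact profile/proxy wiring may differ by version:

from browsermobproxy import Server
from selenium import webdriver

# Placeholder path to the BrowserMob Proxy binary.
server = Server("/path/to/browsermob-proxy/bin/browsermob-proxy")
server.start()
proxy = server.create_proxy()

# Route Firefox through the proxy (older Selenium bindings).
profile = webdriver.FirefoxProfile()
profile.set_proxy(proxy.selenium_proxy())
driver = webdriver.Firefox(firefox_profile=profile)

# Record a HAR for the page load, then inspect the status codes.
proxy.new_har("google")
driver.get("http://google.com")

for entry in proxy.har["log"]["entries"]:
    status = entry["response"]["status"]
    # A status of 0 usually means the request never reached a server,
    # e.g. no internet connection or an unreachable host; 4xx/5xx are
    # ordinary HTTP errors.
    if status == 0 or status >= 400:
        print("Load problem:", entry["request"]["url"], status)

driver.quit()
server.stop()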
If you just need to remote-control the Tor Browser, you might also consider the Marionette framework by Mozilla. Bonus: it fails when a page cannot be loaded (see navigate(url) in the API):
The command will return with a failure if there is an error loading
the document or the URL is blocked. This can occur if it fails to
reach the host, the URL is malformed, the page is restricted (about:*
pages), or if there is a certificate issue to name some examples.
Example use (copied from another answer):
To use with the Tor Browser, enable marionette at startup via
Browser/firefox -marionette
(inside the bundle). Then, you can connect via
from marionette import Marionette
client = Marionette('localhost', port=2828)
client.start_session()
and load a new page for example via
url = 'http://mozilla.org'
client.navigate(url)
For more examples, there is a tutorial.
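Putting the two snippets together, a minimal sketch of the failure behaviour described above; the hostname is fake, and the exact exception class varies across Marionette client versions, so this just catches Exception:

from marionette import Marionette

client = Marionette('localhost', port=2828)
client.start_session()

try:
    # navigate() raises when the document fails to load, unlike plain
    # Selenium, which silently displays the browser's error page.
    client.navigate('http://this-host-does-not-exist.example')
except Exception as e:
    print('Load failed:', e)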
I am working on a file upload module that works in Internet Explorer only, and it requires the following browser setting to be enabled:
"Include local directory path when uploading files to server"
A failure message, "Unable to upload file", is displayed when the required setting is not made in the browser during a manual attempt; otherwise it works fine.
Now, when I try to record the scenario in JMeter, I get the same error message even though I have made the required browser settings.
Note: I also tried to include these calls by copying them from the browser tools and pasting them into the JMeter transaction, but I get the same result in the response.
Has anyone experienced the same, or can anyone help me out?
Thanks,
Nitin
A few things to consider:
Make sure that you use the Java implementation of HTTP Request
Make sure that "Use multipart/form-data for POST" is checked
Make sure that you provide a file in the "Send Files With the Request" field, with the correct path, parameter name, and MIME type
If all of the above has already been applied and you still experience problems, I would recommend capturing the data sent by Internet Explorer with a sniffer (Fiddler, Wireshark, etc.) and comparing it to the data sent by JMeter. They must be the same. If they aren't, you'll need to customize things using the HTTP Header Manager, HTTP Cookie Manager, etc. If JMeter is not flexible enough to set all the required parameters via the GUI, i.e. it still tries to send the full path of the file instead of just the filename or vice versa, you can always go deeper and manually build the multipart POST request via a Java Request Sampler (see the SleepTest and JavaTest source code for details, and the skeleton sketched after the paths below) or via Beanshell, which is 100% compatible with Java syntax but may be harder to debug due to its script nature.
The paths to the SleepTest and JavaTest files are as follows:
/src/protocol/java/org/apache/jmeter/protocol/java/test/JavaTest.java
/src/protocol/java/org/apache/jmeter/protocol/java/test/SleepTest.java
The JMeter sources are available from the JMeter download page.
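As a starting point for the Java Request Sampler route mentioned above, here is a minimal skeleton; the parameter names and defaults are placeholders, and the multipart body itself still has to be written against your server's expectations:

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

public class FileUploadSampler extends AbstractJavaSamplerClient {

    @Override
    public Arguments getDefaultParameters() {
        Arguments args = new Arguments();
        args.addArgument("url", "http://example.com/upload"); // placeholder endpoint
        args.addArgument("file", "C:\\work\\test.txt");       // placeholder file path
        return args;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.sampleStart();
        try {
            String url = context.getParameter("url");
            String file = context.getParameter("file");
            // Build the multipart/form-data POST here, e.g. with
            // java.net.HttpURLConnection: write the boundary, the
            // Content-Disposition header (full path or bare filename,
            // whichever the server expects), and then the file bytes.
            result.setSuccessful(true);
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage(e.toString());
        } finally {
            result.sampleEnd();
        }
        return result;
    }
}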
Steps to upload an image using JMeter:
Locate the image in the bin folder.
Select the POST method and check "Use multipart/form-data for POST" in the HTTP Request.
Provide the details of "Send Files With the Request" in the HTTP Request.
Record the upload scenario and press the stop button in JMeter (the image won't upload while recording in JMeter).
Now, before running the script, go to the upload request and give the full path of the image in the "Send Files With the Request" field.
Now run the script. You should be able to see the image.
OK, so since I applied an SSL certificate to our site, the graphs in the dashboard have stopped working. (EDIT: I forgot to add, I'm trying to get this working in the Magento dashboard.) I read this site:
http://www.phpro.be/blog/detail/magento-dashboard-charts-not-working
which states that you should add "true" to the getChartUrl() call within
app/design/adminhtml/default/default/template/dashboard/graph.phtml
This works on a site not using SSL.
I then found this site:
http://webguru.org/2009/11/09/php/how-to-use-google-charts-api-in-your-secure-https-webpage/comment-page-1/#comment-988
but this supposedly opens up the opportunity for SQL injection and other malicious attacks.
Next I found this site:
http://store.ivvy.ru/blog/chartssl/
and followed the instructions, but the charts still aren't working.
I tried changing
const API_URL='http://chart.apis.google.com/chart';
to both
const API_URL='//chart.apis.google.com/chart';
const API_URL='https://chart.apis.google.com/chart';
but neither worked.
Can anyone point me to any other examples or explanations, or explain how to get this working?
Many thanks.
Do you use Firebug or another browser debugging tool? If so, what is the error on the Console tab when you load the page containing the charts? I can tell you now, it's most likely due to trying to load an HTTP JS script over an HTTPS connection, which will fail.
Try using their latest API URL which supports HTTPS:
https://chart.googleapis.com/chart
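If it helps, in stock Magento 1.x that constant lives in Mage_Adminhtml_Block_Dashboard_Graph (app/code/core/Mage/Adminhtml/Block/Dashboard/Graph.php). A minimal sketch of the change, assuming you copy the class to app/code/local rather than editing the core file:

// app/code/local/Mage/Adminhtml/Block/Dashboard/Graph.php (copy of the core class)
// Point the dashboard charts at the HTTPS-capable endpoint:
const API_URL = 'https://chart.googleapis.com/chart';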
Sometimes I come across an image that I can't scrape and save. An example of this is:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487
When I hit the URL from Internet Explorer I see the image, but when I try to get it with the code below, GetResponse throws a System.Net.WebException: "The remote server returned an error: (403) Forbidden":
string url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse();
Any ideas on how to get this image?
Edit:
I am able to save images that do have extensions. For example, I can scrape the following image just fine:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12659/image/original.jpg?1326828951
Although HTTP was originally supposed to be stateless, there are a lot of implementations that rely on it not being stateless. I could configure my webserver to only accept requests for "http://mydomain.com/sexy_avatar.jpg" if you provide a cookie proving you are logged in. If not, I send you a 303 redirect to "http://mydomain.com/avatar_for_public_use.jpg".
Amazon could be doing the same. Try loading the web page in Chrome and look at the Network view in the developer tools (Ctrl+Shift+J) to see all the headers supplied to the website. Maybe you even need to do a full navigation in the same session before you are allowed to see the image. This is certainly the case in many web applications I have developed :-)
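A minimal sketch along those lines, assuming the 403 comes from missing browser-like headers; the User-Agent string and Referer below are placeholders, not values confirmed to satisfy that server:

using System;
using System.IO;
using System.Net;

class ImageFetcher
{
    static void Main()
    {
        string url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; Trident/7.0)"; // placeholder: mimic a browser
        request.Accept = "image/png,image/*;q=0.8,*/*;q=0.5";
        request.Referer = "https://plumdistrict.com/";                   // placeholder referer
        request.CookieContainer = new CookieContainer();                 // carry any session cookies

        using (WebResponse response = request.GetResponse())
        using (Stream stream = response.GetResponseStream())
        using (FileStream file = File.Create("original.jpg"))
        {
            stream.CopyTo(file); // save the bytes; the .jpg extension is a guess
        }
        Console.WriteLine("Saved original.jpg");
    }
}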
Well, it looks like it's being generated from a script (possibly being retrieved from a database). The server should be sending a file/content type to go along with that... but it doesn't seem to be, which I believe is a violation of standards.
My Linux box knows full well that that's a JPEG image once it's on my hard drive, because it examines file headers rather than relying on extensions. Perhaps there is a tool to do the same in Windows?
Edit: Actually, on further contemplation, it seems odd that you'd get a 403 for that. Perhaps the server is actually blocking you from retrieving the file in that manner.