I'm using the Selenium Client (v1.2.18) to do automated navigation of retail websites for which there is no external API. My goal is to determine real-time, site-specific product availability via the "Check Availability" button that exists on many of these sites.
In case there's any concern, each of these checks will be initiated by a real live consumer who is actually interested in whether or not something's available at that store. There will be no superfluous requests or other internet badness.
I'm using Selenium's Grid framework so that I can run checks in parallel, and I'm keeping each of the controlled browsers open between requests. The issue I'm running into is that I need to perform these checks across a number of different domains, and I won't know in advance which one I'll have to check next. I didn't think this would be a big problem, but it turns out that when a Selenium browser instance is created, it gets tied to a specific domain, and I haven't found any way to change which domain that is. As a result, I have to restart a browser every time a request comes in for a domain we're not already tied to.
Oh, and the reason we're using Selenium instead of something more lightweight (e.g. Mechanize) is that we need something that can handle JavaScript.
Any help on this would be greatly appreciated. Thanks in advance.
I suspect you are restricted from changing domains because of the same-origin policy. Have you tried using a browser launcher with elevated security privileges, such as *iehta for Internet Explorer or *chrome for Firefox? While using these browser modes, call the open method in your tests and pass the URL you want to open. This might solve your problem.
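To make that concrete, here's a minimal sketch with the Java RC client (the retailer URLs and the button locator are placeholders, not from your setup; if your client is another language, the same open() call applies):

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    public class CrossDomainCheck {
        public static void main(String[] args) {
            // "*chrome" launches Firefox in a privileged mode (it is not Google Chrome);
            // use "*iehta" for the Internet Explorer equivalent. In these modes, open()
            // is allowed to navigate across domains within the same browser instance.
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, "*chrome", "http://retailer-a.example.com/");
            selenium.start();

            selenium.open("http://retailer-a.example.com/product/123");
            selenium.click("css=button.check-availability"); // hypothetical locator

            // Same browser, different domain: no restart needed.
            selenium.open("http://retailer-b.example.com/item/456");

            selenium.stop();
        }
    }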
I don't know if this question makes much sense, but I would like to know whether load testing can be done against different IE versions. For instance, my product supports IE 8, 9, and 10. Please clarify.
Thanks.
When we do load testing, we are interested in the server's performance, not the client browser's performance.
JMeter itself acts like a browser: it sends the HTTP request and, once it gets the response, can display it like a browser (it does not execute JavaScript, though, which makes sense because JavaScript is executed on the client's machine).
So JMeter is not the tool for your requirement. You may want to look at HttpWatch, which shows the page load time in the browser.
This only makes sense if
you build pages dynamically based upon browser type and version, and
the load generated by your exceptional page-generation path, which is typically early versions of IE, is significant
Simulating different browser versions is not always a requirement in performance testing, but it can be needed in specific cases. Do it only if it really makes sense.
There are commercial tools that can easily simulate different browser types/versions, such as LoadRunner and Microsoft Visual Studio. If you are going to use open source, try mimicking the scenarios by providing the browser details/properties in the User-Agent header and so on, just like the actual browser sends. You can capture these by checking the request headers from the browser (press F12 in Chrome and look at the Network tab).
The only thing you can do with JMeter is add an HTTP Header Manager to send a different User-Agent header value for each Internet Explorer version. You can refer to the Understanding user-agent strings guide to see the user-agent values for different IE versions; however, I don't think it will provide the full picture, as JMeter doesn't actually "render" the page.
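For illustration, representative User-Agent values for IE 8, 9, and 10 look like the following (reproduced from memory of Microsoft's published tokens; verify them against the linked guide before relying on them):

    User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)
    User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
    User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)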
So I would go the following way:
Use JMeter to put your server under expected load
Use one Selenium instance per Internet Explorer version to perform real browser testing of the system while it's under JMeter's load, to measure performance differences (see the sketch after this list).
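A minimal sketch of that second step with the Java RC client (host, port, and URLs are placeholders; run one such instance on a machine with each IE version installed):

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    public class TimedPageLoad {
        public static void main(String[] args) {
            // "*iexplore" starts whichever IE version is installed on this machine.
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, "*iexplore", "http://yourapp.example.com/");
            selenium.start();

            long start = System.currentTimeMillis();
            selenium.open("/dashboard");          // hypothetical page under test
            selenium.waitForPageToLoad("30000");  // timeout in milliseconds
            System.out.println("Load time: "
                    + (System.currentTimeMillis() - start) + " ms");

            selenium.stop();
        }
    }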
Livelook.com "shares" screens without requiring either party to download or install anything. Here are my guesses on how it works:
A proxy server keeps track of the shared parties, who are both browsing the same website (thus not really screen sharing)
Interaction is "recorded" by host-side JavaScript and then serialized over to the client's JavaScript (Ajax), which de-serializes the actions from the host and mimics the behaviour.
Does anyone else have a take on this?
I did some research on this, and I'm pretty sure it uses Java, since that is one of the requirements:
"To use LiveLOOK's screen sharing and co browsing products you will need a browser and java. Java is typically pre-installed on all computers. If your computer d..."
http://www.livelook.com/faq.asp
Here is perhaps how they get access to the screen:
http://www.daniweb.com/software-development/java/code/216988/java-code-to-capture-your-screen-as-image
But I'm a C++ man, so I wouldn't know if there's a better way to do things like that in Java.
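For what it's worth, the core of the screen-capture approach in that article is only a few lines of standard Java (a sketch of what a sufficiently privileged applet could do; a real product would then compress and stream the frames):

    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class ScreenGrab {
        public static void main(String[] args) throws Exception {
            // Grab the whole desktop as an image, as the linked article does.
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage capture = new Robot().createScreenCapture(screen);
            ImageIO.write(capture, "png", new File("screen.png"));
        }
    }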
I have only a very basic knowledge of web programming, and I couldn't figure out the answer to this question.
Typically, a cookie is used to identify a session in web applications. However, as far as I know, multiple browser windows share cookies. In that case, how do web applications distinguish between tabs?
Edit: I guess all the answers are saying the same thing: there is no standard way to handle this. OK. Thanks for resolving my uncertainty, guys.
However, as far as I know, multiple browser windows share cookies. In that case, how do web applications distinguish between tabs?
It cannot distinguish them at all. Hence the challenge for web developers: make the application robust enough to prevent data destruction or unauthorized operations when access is performed from some other "outdated" tab.
They don't. The application does not know that you have two tabs open. If you are tracking sessions, then it can know that you have two sessions open.
What are you trying to accomplish?
They don't. You have one cookie store per browser (Firefox/IE/Google Chrome), not per tab.
You'll have to distinguish the tabs client-side (i.e. with JavaScript), or work around it server-side as sketched below.
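There is no standard mechanism, but a common workaround is to mint a per-tab token when a page is first rendered, embed it in the page, and require it on every subsequent request from that tab. A rough servlet sketch (all names here are illustrative, not from the answers above):

    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class TabAwareServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // The session cookie alone cannot tell tabs apart, so mint a fresh
            // id for this tab/window and embed it in the rendered page.
            String tabId = UUID.randomUUID().toString();
            resp.setContentType("text/html");
            resp.getWriter().println(
                "<form method='post' action='save'>"
                + "<input type='hidden' name='tabId' value='" + tabId + "'/>"
                + "<input type='submit' value='Save'/></form>");
        }

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Key per-tab state by (session, tabId) instead of by the session alone.
            String tabId = req.getParameter("tabId");
            req.getSession().setAttribute("state-" + tabId, "per-tab state goes here");
        }
    }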
I would like to set up an HTTP proxy on my work machine (no admin rights, Windows XP) to allow access only to a whitelist of URLs. What would be the easiest solution? I prefer open-source software if possible.
Squid seems to be the de facto standard proxy. This link describes how to set it up on a Windows box: http://www.ausgamers.com/features/read/2638752
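A minimal whitelist setup in squid.conf might look like this (the domains are placeholders; double-check the ACL syntax against the Squid documentation for your version):

    # Allow only the listed destination domains; deny everything else.
    acl whitelist dstdomain .example.com .trusted-vendor.example
    http_access allow whitelist
    http_access deny all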
Why not use the Content Advisor in IE? You can provide a list of approved sites; anything else is blocked. Or do you want pass-through functionality like a true proxy?
Content Advisor will ask for authorization every time a JavaScript function is called. At least that's my experience right now, and that's how I landed here, after hours of googling.
You are right. However, if the sites on the whitelist don't use JavaScript intensively, I would suggest trying that option first, because (and I'm an IT person) it's FAR easier to set up Content Advisor than a proxy server. Google "noaccess.rat" and you'll come across articles that tell you how to set up IE using a whitelist approach.
Having said this, you must be fully aware that Content Advisor can be easily disabled, even without knowing the password. One of my users did it in no time. You can find this on Google as well.
Alex
My agile team will be adding new features to an existing realty website. As we add the features, we want to have a better handle on the site's overall performance as well as the performance of particular pages.
I would like to automate the gathering of performance metrics on a request/response basis for each page (e.g. what sub-requests the browser sends out, how many there are, how much data is transferred, and how long each request takes to fulfill).
Firebug currently captures this information in its Net panel; however, I haven't found any way to pull this information out programmatically.
Does anyone know of a way to pull this information out after a page has loaded?
We are currently running our user acceptance tests with Selenium, and I have considered adding this feature to the Selenium interface so that our tests could run and collect the data without starting any other service.
All suggestions are welcome, including ones that leverage other tools/methods to gather the performance metrics.
Thank you.
Jan Odvarko has written a tutorial on how to use the new listener functionality within Firebug to log Net panel results:
"Since Firebug 1.4a13 the Net panel introduces, among other things, several new events that allow to easily collect all network requests and also related info gathered and computed by Firebug.
This functionality should be useful also in cases where Firebug extensions want to store network activity info into a local database or send it back to the server for further analysis (I am thinking about performance statistics here)."
Take a look at the NetExport extension for Firebug.
Steps:
enable auto-export in the preferences (you can automate this one as well)
choose the folder where the data is to be saved
read the file (see the sketch below)
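NetExport writes HTTP Archive (.har) files, which are plain JSON, so the "read the file" step can be a short script. A sketch using the org.json library (the file name is a placeholder, and the fields follow the HAR spec; adjust to whatever your NetExport version actually emits):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import org.json.JSONArray;
    import org.json.JSONObject;

    public class HarSummary {
        public static void main(String[] args) throws Exception {
            // Slurp the exported HAR file.
            BufferedReader in = new BufferedReader(new FileReader("export.har"));
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) sb.append(line);
            in.close();

            // Each entry is one request/response pair with its total time in ms.
            JSONArray entries = new JSONObject(sb.toString())
                    .getJSONObject("log").getJSONArray("entries");
            for (int i = 0; i < entries.length(); i++) {
                JSONObject e = entries.getJSONObject(i);
                System.out.printf("%8.1f ms  %s%n",
                        e.getDouble("time"),
                        e.getJSONObject("request").getString("url"));
            }
            System.out.println(entries.length() + " requests total");
        }
    }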
While it isn't directly a Firebug solution, perhaps something like Jiffy would help?
Jiffy works much like a server-based version of Firebug's measurement tools. I haven't used it in anger yet, but it may do what you're looking for.
http://code.google.com/p/jiffy-web/
Jiffy allows developers to:
measure individual pieces of page rendering (script load, AJAX execution, page load, etc.) on every client
report those measurements and other metadata to a web server
aggregate web server logs into a database
generate reports
There is a way to use YSlow to beacon out performance data to a URL of your choice. It's not well documented; the only info I found was here:
http://tech.groups.yahoo.com/group/exceptional-performance/messages/490?threaded=1&m=e&var=1&tidx=1
Aside from that, I would look into writing a Firebug plugin; I think you can access most Firebug properties. Here's a tutorial: http://www.firephp.org/Reference/Developers/ExtendingFirebug.htm
Ben,
I've done this by extending Selenium RC's ProxyHandler to queue up the URLs seen and then allow you to pull them down via some other API. It requires that you proxy everything, which isn't Selenium's default behavior. The nice thing is that Selenium ends up being both the place to drive the automation and to collect the results.
This is probably a feature we'll soon add to Selenium RC right after we get 1.0 out the door (we're very close!).
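For the record, the shape of such an extension is roughly the following (a from-memory sketch: the class is org.openqa.selenium.server.ProxyHandler, but its constructor and the Jetty-style handle() signature vary across RC versions, so treat every signature here as an assumption to verify against your Selenium source):

    import java.io.IOException;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import org.mortbay.http.HttpException;
    import org.mortbay.http.HttpRequest;
    import org.mortbay.http.HttpResponse;
    import org.openqa.selenium.server.ProxyHandler;

    // Records every URL that passes through Selenium RC's proxy so that
    // tests can retrieve the list afterwards.
    public class RecordingProxyHandler extends ProxyHandler {
        private final Queue<String> seenUrls = new ConcurrentLinkedQueue<String>();

        @Override
        public void handle(String pathInContext, String pathParams,
                           HttpRequest request, HttpResponse response)
                throws HttpException, IOException {
            seenUrls.add(request.getURI().toString()); // note the URL, then proxy as usual
            super.handle(pathInContext, pathParams, request, response);
        }

        public Queue<String> seenUrls() {
            return seenUrls; // expose via whatever API suits your tests
        }
    }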
Okay, I admit this is not a direct answer, but how about going right to the source? Cut out Firebug and go to the web server. Can the server log events with sufficient granularity to allow calculation of the information you require? Parsing the log file into useful data should not be particularly difficult, and it has the advantage of being user-platform independent, with the potential to log a greater set of data than Firebug offers (awesome tool, btw).
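For example, if the server is Apache and you append %D (request duration in microseconds) to the LogFormat, a few lines of code can report per-request timings. A sketch (the log format is an assumption; adapt the regex to your own configuration):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class AccessLogTimer {
        // Matches lines such as:  "GET /product/123 HTTP/1.1" 200 5120 123456
        // i.e. request line, status, bytes, then %D appended to the LogFormat.
        private static final Pattern LINE = Pattern.compile(
                "\"\\w+ (\\S+) HTTP/[\\d.]+\" (\\d{3}) (\\d+|-) (\\d+)");

        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(new FileReader("access.log"));
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = LINE.matcher(line);
                if (m.find()) {
                    System.out.printf("%8.1f ms  %s%n",
                            Long.parseLong(m.group(4)) / 1000.0, // %D is in microseconds
                            m.group(1));
                }
            }
            in.close();
        }
    }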