I am working on test automation for a website that checks the content of a Google Maps pin (typically the address of the location). The website is deployed on multiple servers for load-balancing purposes; therefore, I have to test on all the servers.
I find that the XPath to access the map pin is different on the two sets of servers. On one set of servers, it is:
.//*[@id='map']/div/div/div[1]/div[4]/div[4]/div/div[2]/div/div/div
and on the other set, it is
.//*[@id="map"]/div/div/div[1]/div[3]/div[2]/div[4]/div/div[2]/div[1]/div/div
I am puzzled as to why the XPaths would have different values. Does it mean that the underlying code implementation is different? The code on the two sets of servers was deployed two weeks apart. BTW, I am using the same version of ChromeDriver and am running the tests on the same virtual machine.
Any insight is greatly appreciated.
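For reference, a minimal sketch of how the check could tolerate both layouts by trying each XPath in turn (Python with Selenium is assumed here, since the question doesn't say which binding is used; the page URL and expected text are placeholders):

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.common.exceptions import NoSuchElementException

    # The two pin locators observed on the two sets of servers.
    PIN_XPATHS = [
        ".//*[@id='map']/div/div/div[1]/div[4]/div[4]/div/div[2]/div/div/div",
        ".//*[@id='map']/div/div/div[1]/div[3]/div[2]/div[4]/div/div[2]/div[1]/div/div",
    ]

    def find_map_pin(driver):
        """Return the map pin element using whichever XPath matches on this server."""
        for xpath in PIN_XPATHS:
            try:
                return driver.find_element(By.XPATH, xpath)
            except NoSuchElementException:
                continue
        raise NoSuchElementException("Map pin not found with any known XPath")

    driver = webdriver.Chrome()
    driver.get("http://server-under-test/page-with-map")  # placeholder URL
    pin = find_map_pin(driver)
    assert "expected address" in pin.text.lower()  # placeholder assertion
    driver.quit()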
I'm working on an application that's supposed to have multiple subdomains for multiple regions, with geospatial data served separately for each region. It should all be hosted on a single server.
A demo of the app is at nadlanu.gromatic.hr, with opcina.gromatic.hr for the other region.
I'm having a problem separating the layer "Komunalni problemi" for the two regions: when a report is submitted (right menu, "Predaj prijavu") on opcina.gromatic.hr, it shows up only on nadlanu.gromatic.hr.
I've created a separate layer with a different store (with a different database) and workspace in GeoServer for that but, obviously, that doesn't work. So I've come to understand (please correct me if I'm wrong) that I need multiple instances of GeoServer to solve this issue, but that won't work either due to the single-server hosting limitation, as I need more than 20 separate subdomains with separate geospatial data.
Thank you for the answers!
Hi, I was not previously aware of load testing, and a doubt came up while learning it.
Even if it's not a valid question, please help me with it.
In JMeter we can simply record and run a load test, right? If that's the case, and I hit some unknown application with a lot of load from my client side, it might cause the server to crash, right? What should the owners do if their server crashes because of an unknown person's load test?
Are there any specific requirements for doing a load test, or can we simply load test any website? Please let me know, even if my query is not a valid one. Thanks in advance.
The majority of web applications are protected from DoS attacks, so most likely you will not be able to "crash" the server; the traffic from your IP will simply get blocked and your IP will get banned.
Moreover, your actions would fall under the Computer Misuse Act, and you could be subject to imprisonment of up to 1 year and a fine of up to 5,000 pounds. That law applies to the UK, but I'm pretty sure an equivalent exists in every country around the globe.
So don't load test an application without the explicit permission of the application's owner, or you will run into trouble.
Check out Websites Forbidden to Test Using BlazeMeter for an explicit list of websites you must not test by any means. There are some sites you can use for practice, like http://newtours.demoaut.com/ or http://blazedemo.com/, however I would recommend using something you can deploy locally, as this is the safest way to practice load testing; moreover, you will be able to see the server-side impact of your test.
I am looking to run one test against multiple URLs using the Selenium IDE plugin for Firefox. My environment is load balanced, so I have the same website running on a number of servers. I will be testing internally, so I can access each server via its internal IP address (e.g. 192.168.1.1, 192.168.1.2, etc.). The purpose of the test is to check that the servers are synchronized, by confirming that the expected UI elements are there.
Is there a Selenium command that will allow me to open a URL (e.g. 192.168.1.1), run a set of UI checks, then open the next URL (e.g. 192.168.1.2), and run the same UI checks again?
I currently change the base URL before every test to achieve this, but if I could automate this entirely, it would save me a lot of time (I have lots of different servers to hit).
Not with Selenium IDE. However, you can export the recorded test as code for various languages, and the Selenium API in those languages does allow opening multiple, independent browser instances.
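For example, a minimal sketch of what the exported test could look like in Python (the server list, page path, and checked element IDs are assumptions, not taken from the question):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Internal addresses of the load-balanced servers (example values).
    SERVERS = ["192.168.1.1", "192.168.1.2", "192.168.1.3"]

    def run_ui_checks(driver, server):
        """Open the site on one server and confirm the expected UI elements are present."""
        driver.get(f"http://{server}/")
        # Placeholder checks; replace with the real element locators.
        assert driver.find_element(By.ID, "header") is not None
        assert driver.find_element(By.ID, "login-form") is not None

    for server in SERVERS:
        driver = webdriver.Chrome()  # a fresh, independent browser instance per server
        try:
            run_ui_checks(driver, server)
            print(f"{server}: OK")
        finally:
            driver.quit()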
I have gone through the whole process of testing and setting up two or even three VMs under availability sets and load-balanced endpoints, and I can see that different VM instances are served when accessing the domain, since I put a different title on each instance of a CMS website to test the availability. The main reason I am looking into this is that the current VM/website has had some problems when Windows ran its periodic updates, which at times stopped FTP or changed server settings.
While this is working almost the way I thought it would, my question is about what happens when the client this will be set up for makes changes to the CMS website. My concern is that if they make changes to the CMS, those changes only apply to one of the VM instances in the availability set, and since the load balancer serves the different VM instances in turn, different changes could end up applied to each VM in the availability set.
What I am trying to determine, without coming across anything concrete, is whether there is a way to set up a shared network or system that mirrors any changes to each VM so that the website stays consistent, or whether using the availability set for the current VM and website is still applicable.
If anyone can give me some insight that would be great.
Is using the server's file system necessary for the CMS software? Could the CMS software read/write content to/from a database instead?
If using the server file system is the only option, you could probably set up a file share on one server that all the other servers would work against. This creates the problem, though, that if the primary server (the one containing the file share) goes down for some reason, the site goes down as well.
Another option could be to leverage Web Deploy to help publish the content changes. Here are two blog posts that discuss this further:
http://www.wadewegner.com/2013/03/using-windows-azure-virtual-machines-to-publish-and-synchronize-a-web-farm/
http://michaelwasham.com/2012/08/13/publishing-and-synchronizing-web-farms-using-windows-azure-virtual-machines/
This really depends on the CMS system you're using.
Some CMS systems, especially modern ones, will persist settings in some shared storage, like a SQL Server database, and thus any changes that users make in the CMS will be stored in this shared storage and be available to all web servers that are housing the CMS.
Other CMS systems may not be compatible with load-balanced web servers. Doing file sharing/replication/etc of the files stored on local servers may or may not work, depending on the particular CMS and its architecture. I would really try to avoid this approach.
Most solutions I've read here for supporting subdomain-per-user at the DNS level are to point everything to one IP using *.domain.com.
It is an easy and simple solution, but what if I want to point the first 1,000 registered users to serverA, and the next 1,000 registered users to serverB? This is the preferred solution for us to keep our software and hardware costs for clustering down.
(Diagram from the MS IIS site: http://learn.iis.net/file.axd?i=1101)
The most logical solution seems to be one A record per subdomain in the zone data files. BIND doesn't seem to have any size limit on zone data files; they are only restricted by the available memory.
However, my team is worried about the latency of getting a new subdomain up and ready, since creating a new subdomain consists of inserting a new A record and restarting the DNS server.
Is the performance of restarting the DNS server something we should worry about?
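For concreteness, the per-subdomain provisioning step could be scripted roughly like this (a minimal sketch; the zone file path, zone name, and IP are placeholders, and it assumes BIND 9 with rndc available, using a per-zone reload rather than a full restart):

    import subprocess

    ZONE_FILE = "/etc/bind/zones/db.domain.com"  # placeholder path
    ZONE_NAME = "domain.com"                     # placeholder zone

    def add_subdomain(subdomain, ip):
        """Append an A record for the new subdomain and reload the zone."""
        with open(ZONE_FILE, "a") as f:
            f.write(f"{subdomain}\tIN\tA\t{ip}\n")
        # In practice the zone's SOA serial should also be incremented so
        # secondaries pick up the change.
        subprocess.run(["rndc", "reload", ZONE_NAME], check=True)

    add_subdomain("user1001", "192.0.2.10")  # example values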
Thank you in advance.
UPDATE:
It seems most of you are suggesting I use a reverse proxy setup instead:
(Diagram from the MS IIS site: http://learn.iis.net/file.axd?i=1102; ARR is IIS7's reverse proxy solution)
However, here are the CONS I can see:
single point of failure
cannot strategically set up servers in different locations based on IP geolocation.
Use the wildcard DNS entry, then use load balancing to distribute the load between servers, regardless of which client it is.
While you're at it, skip the URL rewriting step and have your application determine which account it is based on the URL as entered (you can just as easily determine what X is in X.domain.com as in domain.com?user=X).
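For instance, a small sketch of pulling X out of the Host header (Python; the base domain and the www exclusion are illustrative assumptions):

    BASE_DOMAIN = "domain.com"  # placeholder

    def account_from_host(host):
        """Return the account name X for a request to X.domain.com, else None."""
        host = host.split(":")[0].lower()  # strip any port
        suffix = "." + BASE_DOMAIN
        if host.endswith(suffix):
            sub = host[:-len(suffix)]
            if sub and sub != "www":
                return sub
        return None

    assert account_from_host("alice.domain.com") == "alice"
    assert account_from_host("www.domain.com") is None
    assert account_from_host("domain.com:8080") is None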
EDIT:
Based on your additional info, you may want to develop a "broker" that stores which clients are assigned to which servers. Make that public-facing, then draw on the resources associated with the client as stored in the broker. Your front end can be load balanced, and you can then pull from the file/DB servers based on who the client is.
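A minimal sketch of such a broker lookup (Python; the 1,000-users-per-server rule and the server names come from the question's example, the hostnames themselves are placeholders, and a real broker would keep this mapping in a database):

    # Ordered list of back-end servers; each holds a block of registered users.
    SERVERS = ["serverA.internal", "serverB.internal"]  # placeholder hostnames
    USERS_PER_SERVER = 1000

    def backend_for(user_number):
        """Map a 1-based registration number to the server holding that user's data."""
        index = (user_number - 1) // USERS_PER_SERVER
        return SERVERS[min(index, len(SERVERS) - 1)]

    assert backend_for(1) == "serverA.internal"
    assert backend_for(1000) == "serverA.internal"
    assert backend_for(1001) == "serverB.internal"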
The front-end proxy with a wild-card DNS entry really is the way to go with this. It's how big sites like LiveJournal work.
Note that this is not just a TCP-layer load balancer: there are plenty of solutions that will examine the host part of the URL to figure out which back-end server to forward the query to. You can easily do it with Apache running on a low-spec server with suitable configuration.
The proxy ensures that each user's session always goes to the right back-end server and most any session handling methods will just keep on working.
Also the proxy needn't be a single point of failure. It's perfectly possible and pretty easy to run two or more front-end proxies in a redundant configuration (to avoid failure) or even to have them share the load (to avoid stress).
I'd also second John Sheehan's suggestion that the application just look at the left-hand part of the URL to determine which user's content to display.
If using Apache for the back-end, see this post too for info about how to configure it.
If you use tinydns, you don't need to restart the nameserver when you modify its database, and it should not be a bottleneck because it is generally very fast. I don't know whether it performs well with 10,000+ entries, though (it would surprise me if it didn't).
http://cr.yp.to/djbdns.html
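For illustration, adding a subdomain with tinydns amounts to appending a line to its data file and rebuilding data.cdb, which the running daemon picks up without a restart. A rough sketch (the data directory is the conventional one from the djbdns docs and the record values are placeholders):

    import subprocess

    DATA_DIR = "/etc/tinydns/root"  # conventional tinydns data directory

    def add_subdomain(fqdn, ip, ttl=300):
        """Append an A record in tinydns-data format and rebuild data.cdb."""
        with open(f"{DATA_DIR}/data", "a") as f:
            f.write(f"+{fqdn}:{ip}:{ttl}\n")  # '+' adds a plain A record
        # 'make' runs tinydns-data, which atomically replaces data.cdb;
        # the running tinydns serves the new data immediately, no restart needed.
        subprocess.run(["make"], cwd=DATA_DIR, check=True)

    add_subdomain("user1001.domain.com", "192.0.2.10")  # example values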