HTMLUnit not able to provide updated page on scroll - htmlunit

I am trying to get data from a page that dynamically appends data on load.
Specifications:
HtmlUnit version: 2.14
But I am not able to get the new page after scrolling. I tried using various browser versions and all possible code changes. It would be great if anyone could let me know what I am doing wrong.
Moreover, document.documentElement.scrollTop is always returning zero.
final WebClient webClient = new WebClient(BrowserVersion.FIREFOX_24);
webClient.getOptions().setThrowExceptionOnScriptError(false);
webClient.getOptions().setJavaScriptEnabled(true);
webClient.setAjaxController(new NicelyResynchronizingAjaxController());
webClient.waitForBackgroundJavaScript(60000);
HtmlPage page = webClient.getPage("http://www.snapdeal.com/products/mobiles-mobile-phones/?q=Brand:Samsung");
System.out.println(page.getTitleText());
final String pageAsXml = page.asXml();
System.out.println("Page1=\n" + pageAsXml);
String s = "window.scrollBy(0, window.innerHeight);document.documentElement.scrollTop;";
ScriptResult sr = page.executeJavaScript(s);
JavaScriptJobManager manager = page.getEnclosingWindow().getJobManager();
while (manager.getJobCount() > 4) {
    System.out.println("Script job count = " + manager.getJobCount());
    Thread.sleep(1000);
}
System.out.println("Result= " + sr.getJavaScriptResult() + "\n");
HtmlPage page2 = (HtmlPage) sr.getNewPage();
if (page == page2) {
    System.out.println("No difference");
} else {
    System.out.println("Page2\n" + page2.asXml());
}
Thanks & Regards
Reeni

Related

eXist-db REST GET request for dynamic pdf - cannot read source file

(eXist 4.4, XQuery 3.1)
I offer the user the ability to download PDF documents which are dynamically created at the moment of request. The request has two parameters: the document name (i.e. doc=MS609-0002.pdf) and the document language version (i.e. lang=EN).
The function that outputs is in download.xql:
declare function download:download($node as node(), $model as map(*), $doc as xs:string, $lang as xs:string)
{
...
return response:stream-binary($pdf,"application/pdf", $filename)
}
It outputs a PDF fine both from a direct call in an IDE and if I call the function through an eXist HTML template, for example:
http://localhost:8081/exist/apps/deheresi/download?doc=MS609-0002.pdf&lang=EN
However, using HTML means opening another browser window.
Instead I'd like to request a REST GET from a button. I've looked at the eXist REST documentation and I can't get it to work.
According to the documentation, I should issue a GET structured as follows :
http://localhost:8081/exist/rest/db/deheresi/download.xql?doc=MS609-0002.pdf&lang=EN
But when I make that request, I get:
HTTP ERROR 404
Problem accessing /exist/rest/db/deheresi/download.xql.
Reason: Document /db/deheresi/download.xql not found
This variation with /exist/rest/apps/: http://localhost:8081/exist/rest/apps/deheresi/download.xql?doc=MS609-0002.pdf&lang=EN
Returns the following message with a blank tree:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
And this variation with /exist/db/apps/: http://localhost:8081/exist/db/apps/deheresi/download.xql?doc=MS609-0002.pdf&lang=EN
Returns:
XQueryServlet Error
Error found
Message: Cannot read source file
/Applications/eXist-db.app/Contents/Resources/eXist-db/webapp/db/apps/deheresi/download.xql
I've tested file permissions and there seems to be no problem, although there may be a REST permission/configuration requirement that I am not aware of. Are there issues with REST on localhost?
EDIT: this is the full function that should process the REST request:
xquery version "3.1";
module namespace get="/db/apps/deheresi/modules/download";
declare namespace templates="http://exist-db.org/xquery/templates";
declare namespace tei="http://www.tei-c.org/ns/1.0";
declare namespace xsl = "http://www.w3.org/1999/XSL/Transform";
import module namespace xslfo = "http://exist-db.org/xquery/xslfo";
import module namespace document="/db/apps/deheresi/modules/document" at "/db/apps/deheresi/modules/document.xql";
import module namespace document-view="/db/apps/deheresi/modules/document-view" at "/db/apps/deheresi/modules/document-view.xql";
import module namespace document-preprint="/db/apps/deheresi/modules/document-preprint" at "/db/apps/deheresi/modules/document-preprint.xql";
import module namespace document-print="/db/apps/deheresi/modules/document-print" at "/db/apps/deheresi/modules/document-print.xql";
import module namespace functx="http://www.functx.com" at "/db/apps/deheresi/modules/functx.xql";
import module namespace globalvar="/db/apps/deheresi/modules/globalvar" at "/db/apps/deheresi/modules/globalvar.xqm";
declare function download:download($doc as xs:string?, $lang as xs:string?)
{ (: parse $doc to get name of XML to transform, send back pdf with same name :)
let $docset := upper-case(substring-before($doc,"."))
let $filename := concat($docset,".pdf")
let $document := doc(concat($globalvar:URIdata,concat($docset,".xml")))
let $language := if (lower-case($lang) = "fr")
then lower-case($lang)
else "en"
let $filename := concat($docset,".pdf")
(: get XSLT stylesheet :)
let $fostylesheet := document-print:single-doc-fo-stylesheet($language)
(: get XEP FO config:)
let $config := util:expand(doc("/db/apps/deheresi/xep.xml")/*)
(: get xml for transformation in correct language :)
let $xml := document-preprint:single-doc-preprint($document, $language)
(: create FO xml :)
let $fo := util:expand(transform:transform($xml, $fostylesheet, ()))
(: render pdf :)
let $pdf := xslfo:render($fo, "application/pdf", (), $config)
return response:stream-binary($pdf,"application/pdf", $filename)
};
NB: I've put a bounty on this in hopes of receiving a response that walks through the REST input and output with an example of getting a PDF that is dynamically generated. This includes any configuration/permission issues that could affect a REST request.
Since you state the PDF is returned when you call this:
http://localhost:8081/exist/apps/deheresi/download?doc=MS609-0002.pdf&lang=EN
Perhaps what you should be doing is handling that response. A simple example would be this in jQuery using FileSaver.js (you can Google FileSaver.js, download it, and include it in your pages alongside jQuery):
function preview_cover(path){
    var pdffilename = "cover.pdf";
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function(){
        if (this.readyState == 4 && this.status == 200){
            saveAs(this.response, pdffilename);
        }
    }
    xhr.open('GET', 'cover-formatter.xq?cover=' + path + '&page_width=' + page_width + '&page_height=' + page_height);
    xhr.setRequestHeader('Authorization','Basic ' + sessinfo);
    xhr.responseType = 'blob';
    xhr.send();
}
The above example will download the PDF using modern browsers (Chrome, Firefox, Edge).
The code behind is this (I snipped all the other stuff away, just leaving the formatting part):
let $fo := if ($territory = 'WALES') then util:expand(transform:transform($doc, doc("/db/EIDO/data/edit/xsl/EIDOcoverbilingual.xsl"), $parameters))
else util:expand(transform:transform($doc, doc("/db/EIDO/data/edit/xsl/EIDOcover.xsl"), $parameters))
let $pdf := xslfo:render($fo, "application/pdf", (), $config)
let $headers := response:set-header("Content-Disposition", "attachment;filename=document.pdf")
return
response:stream-binary($pdf, "media-type:application/pdf","document.pdf")
Below is lengthier jQuery/JavaScript code that handles the response on the JavaScript side. There are a few tricks to note first. One hack is that iOS and IE9 browsers cannot handle binary downloads in the browser, so the server-side code creates the PDF and, if the browser is IE9 or iOS, stores the result in the DB (or AWS S3) and returns a link to that PDF so that it can be "clicked" to view. Other common browsers can handle the binary data sent back automatically if done correctly; for this we use the FileSaver.js plugin, which downloads the PDF.
Other parts you can frankly ignore: logEvent sends an event to Google Analytics, and the totformats variable tracks user downloads and limits them in any one session. The hack for Chrome downloads is likely not required, as that was a bug in Chrome for Android. Adding and removing the 'loader' classes is for the GUI. The IE9/iOS solution uses the IP as a variable because the database is replicated and load balanced across many countries; since the data is written to the DB for this one call, we need the IP address of the exact server that holds the result. This will go away with S3 integration.
Essentially the key is that it calls the same URL you would and saves the response using:
saveAs(this.response, pdffilename);
This is a call into FileSaver.js, which handles saving the binary data from an XHR GET and downloading it for you. I have snipped this out of much larger code that handles all of the downloads, including dynamically generated ones from RenderX (as yours is) as well as static PDFs.
The call is straightforward, just a GET to customer-formatter.xq, which in my case is the same as calling http://localhost/customer-formatter.xq (because I strip out /exist and my port for Jetty is 80):
xhr.open('GET', 'customer-formatter.xq?masterlang=' + masterlang + '&doclang=' + doclang + '&specialty='+ specialty + '&article=' + docnum + '&user_name=' + loggedInUser + '&territory=' + territory + '&expiry=' + expiry + '&page_width=' + page_width + '&page_height=' + page_height + '&column_count=' + column_count + '&phrasechange=' + phrasechange + '&genlink=' + genlink + '&access=' + access + '&scalefont=' + scalefont + '&skin=' + skin + '&watermark=' + watermarkmsg +'&timestamp=' + timestamp);
totformats++;
if (totformats > maxformats)
window.location.href = '/user?logout=logout';
var docfilename = ((doclang) ? doclang : '') + ((doctype) ? doctype : '');
var pdffilename = docnum + '-' + docfilename + '.pdf';
var xhr = new XMLHttpRequest();
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function(){
    if (this.readyState == 4 && this.status == 200){
        // Do IE9 stuff or iPhone/iPad
        if (version == 9) {
            var ip = this.responseText;
            var a = document.createElement("a");
            a.style = "cursor: pointer;";
            document.body.appendChild(a);
            var url = 'http://' + ip + '/IE9/' + loggedInUser + '-' + docnum + '-English.pdf';
            a.href = url;
            $(a).attr('target','_blank');
            a.click();
            $(a).remove();
            $(doc).removeClass('loader');
            $(doc).prop('disabled',false);
        }
        else if (isiOS) {
            var ip = this.responseText.trim();
            ioswindow.location.href = 'http://' + ip + '/IE9/' + loggedInUser + '-' + docnum + '-English.pdf';
            $(doc).removeClass('loader');
            $(doc).prop('disabled',false);
        }
        // Hack to partially fix Chrome error, file is now in Chrome downloads
        else if (Math.max(document.documentElement.clientWidth, window.innerWidth || 0) <= 1024 && window.chrome) {
            var blob = new Blob([this.response], {type: 'application/pdf'});
            var a = document.createElement("a");
            a.style = "display: none";
            document.body.appendChild(a);
            var url = window.URL.createObjectURL(blob);
            a.href = url;
            a.download = pdffilename;
            a.click();
            window.URL.revokeObjectURL(url);
            $(doc).removeClass('loader');
            $(doc).prop('disabled',false);
        }
        else {
            saveAs(this.response, pdffilename);
            $(doc).removeClass('loader');
            $(doc).prop('disabled',false);
        }
    }
}
xhr.open('GET', 'customer-formatter.xq?masterlang=' + masterlang + '&doclang=' + doclang + '&specialty='+ specialty + '&article=' + docnum + '&user_name=' + loggedInUser + '&territory=' + territory + '&expiry=' + expiry + '&page_width=' + page_width + '&page_height=' + page_height + '&column_count=' + column_count + '&phrasechange=' + phrasechange + '&genlink=' + genlink + '&access=' + access + '&scalefont=' + scalefont + '&skin=' + skin + '&watermark=' + watermarkmsg +'&timestamp=' + timestamp);
xhr.setRequestHeader('Authorization','Basic ' + sessinfo);
if (isiOS)
    xhr.responseType = 'text';
else
    xhr.responseType = 'blob';
xhr.send();
logEvent(docnum, doclang, 'format', specialty, source, docname);
Your download:download function is written in such a way that it works with eXist-db templating. I would suggest abstracting the actual download logic into a separate function in a separate library module.
You can then have your download:download function call your abstracted download logic function, and you can also create a new main module like direct-download.xq (or whatever) that just processes the URL and then calls your abstracted download logic function.

Can't get past two-step security with HtmlUnit

I'm new to HtmlUnit (using version 2.30), working in Eclipse on a Mac. I'm trying to create a stock data scraper by logging onto my Ameritrade account and manipulating the watch lists I've created there. The first login form leads to the two-step security page where the challenge question is asked. I don't know why/how the site decides it wants to challenge my userid/password in the first place. Perhaps because it looks like a new browser?
But anyway I fill out the form on the second page with the answer to the challenge question and submit. Instead of taking me to my account's home page, it once again takes me to the two-step security page with the same challenge question. Here is the relevant code:
final int sleepMinSeconds = 1;
final int sleepRandomSeconds = 4;
final long javascriptTimeout = 10000;
System.out.println("HtmlUnitTest");
String applicationName = "Mozilla";
String applicationVersion = "5.0 (Windows NT 6.3; WOW64; rv:56.0) Gecko/20100101 Firefox/56.0";
final String userAgent = applicationName + "/" + applicationVersion;
BrowserVersion browserVersion = new BrowserVersion.BrowserVersionBuilder(BrowserVersion.FIREFOX_52)
.setApplicationName(applicationName)
.setApplicationVersion(applicationVersion)
.setUserAgent(userAgent)
.build();
WebClient webClient = new WebClient(browserVersion);
java.util.logging.Logger.getLogger("com.gargoylesoftware.htmlunit").setLevel(java.util.logging.Level.ALL);
java.util.logging.Logger.getLogger("org.apache.commons.httpclient").setLevel(java.util.logging.Level.ALL);
webClient.setIncorrectnessListener(new com.gargoylesoftware.htmlunit.IncorrectnessListener() {
    @Override public void notify(String arg0, Object arg1) {}
});
webClient.setJavaScriptErrorListener(new com.gargoylesoftware.htmlunit.javascript.JavaScriptErrorListener() {
    @Override public void timeoutError(HtmlPage arg0, long arg1, long arg2) {}
    @Override public void scriptException(final HtmlPage arg0, final com.gargoylesoftware.htmlunit.ScriptException arg1) {}
    @Override public void malformedScriptURL(HtmlPage arg0, String arg1, java.net.MalformedURLException arg2) {}
    @Override public void loadScriptError(HtmlPage arg0, java.net.URL arg1, Exception arg2) {}
});
webClient.setCssErrorHandler(new com.gargoylesoftware.htmlunit.SilentCssErrorHandler());
webClient.getOptions().setThrowExceptionOnFailingStatusCode(false);
webClient.getOptions().setThrowExceptionOnScriptError(false);
webClient.getOptions().setDoNotTrackEnabled(true);
webClient.getOptions().setActiveXNative(true);
webClient.getOptions().setRedirectEnabled(true);
webClient.getOptions().setPrintContentOnFailingStatusCode(true);
webClient.getCookieManager().setCookiesEnabled(true);
webClient.getOptions().setDownloadImages(true);
String loginURL = "https://www.tdameritrade.com/home.page";
System.out.println("Connecting to " + loginURL + " (" + webClient.getBrowserVersion() + ")");
System.out.print(" Waiting to avoid being detected as a robot...");
Thread.sleep((long)(Math.random()*sleepRandomSeconds) * 1000);
System.out.print(" Done waiting.\n");
HtmlPage page = webClient.getPage(loginURL);
System.out.println("title text: " + page.getTitleText());
System.out.print(" \nWaiting for Javascript to complete...");
webClient.waitForBackgroundJavaScript(javascriptTimeout);
System.out.println("\nOK");
System.out.print(" Waiting to avoid being detected as a robot...");
Thread.sleep((long)(sleepMinSeconds + Math.random()*sleepRandomSeconds) * 1000);
System.out.print(" Done waiting.\n");
System.out.println("Logging in...");
HtmlForm form = page.getFormByName("form-login");
HtmlTextInput useridField = form.getInputByName("tbUsername");
HtmlPasswordInput passwordField = form.getInputByName("tbPassword");
useridField.type("<userid>");
passwordField.type("<password>");
HtmlButton button = form.getButtonByName("");
System.out.println("button value: " + button.getValueAttribute());
// Did this to make sure I had right button, which was unnamed.
// Value is "Log in", so I proceed.
HtmlPage page2 = button.click();
System.out.print(" \nWaiting for Javascript to complete...");
webClient.waitForBackgroundJavaScript(javascriptTimeout);
System.out.println("\nOK");
System.out.print(" Waiting to avoid being detected as a robot...");
Thread.sleep((long)(sleepMinSeconds + Math.random()*sleepRandomSeconds) * 1000);
System.out.print(" Done waiting.\n");
HtmlElement element = page2.getHtmlElementById("loginBlock");
HtmlForm form2 = element.getEnclosingForm();
HtmlPasswordInput challengeField = form2.getInputByName("challengeAnswer");
if (page2.asXml().contains("boss")) {
    System.out.println("boss question...");
    challengeField.type("<answer to boss question>");
}
else if (page2.asXml().contains("street")) {
    System.out.println("street question...");
    challengeField.type("<answer to street question>");
}
else {
    System.out.println("What?");
}
HtmlCheckBoxInput checkBox = form2.getInputByName("rememberDevice");
checkBox.setChecked(true);
HtmlInput button2 = form2.getInputByName("mAction");
System.out.println("button2 value: " + button2.getValueAttribute());
// value here is "submit" - so I proceed
HtmlPage page3 = button2.click();
System.out.print(" \nWaiting for Javascript to complete...");
webClient.waitForBackgroundJavaScript(javascriptTimeout);
System.out.println("\nOK");
webClient.close();
In other words, page2 and page3 are the same, i.e., the two-step security page. I expected page3 to be my account's home page. (I confirmed this by writing them both out as XML to separate files.) I would appreciate any help I can get on this! Thanks!
OK, let's start with some comments on your code.
Not sure what you are trying to achieve by using a different browser setup instead of the built-in default ones. There is nothing wrong with this, but be aware that changing the browser setup will NOT have any effect on the browser behavior (e.g. the supported JS functionality).
Second: if you are hunting for bugs/problems, I fear it is a bad idea to disable all the listeners. It might be worth having this listener output...
Regarding all the options: why not start with the default setup? It is really close to real browsers.
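For illustration, a minimal sketch of that default setup (reusing the loginURL variable from the question; the browser version here is just an example):
// a WebClient with default options already behaves very close to a real browser
WebClient webClient = new WebClient(BrowserVersion.FIREFOX_52);
HtmlPage page = webClient.getPage(loginURL);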
Now some words about the login process:
It will be a great help to try to understand the login process of the real application. All these 'modern' web applications do a lot of strange things (async/JavaScript) to simulate a rich UI on a not-that-rich platform. Tools like Charles WebProxy are of real help to get an idea of the communication done behind the scenes.
One common problem with the HtmlPage page3 = button2.click(); API is that the click method returns the synchronous result of the click. If the button is one of these fancy Ajax buttons, this is usually the page of the button itself. You are already waiting for the Ajax stuff to finish, but the page will not change if there is an Ajax redirect to a new page. In this case you have to do something like this after the wait call.
// there is an ajax redirect that loads a new page into this window
page3 = (HtmlPage) page.getEnclosingWindow().getEnclosedPage();
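Putting the pieces together, a rough sketch of that pattern (reusing button2, webClient, and javascriptTimeout from the question's code; not tested against the actual site):
HtmlPage page3 = button2.click();                          // synchronous result of the click
webClient.waitForBackgroundJavaScript(javascriptTimeout);  // let the background Ajax jobs finish
// pick up whatever the Ajax redirect loaded into the same window
page3 = (HtmlPage) page3.getEnclosingWindow().getEnclosedPage();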
Hope that helps a bit...

Why does Firefox trim the response file name?

The following code accesses a helper method which creates and returns an EPPlus ExcelPackage, then returns the package to the browser:
public ActionResult DownloadMatrixExcel(int projectId)
{
    try
    {
        // Get project details
        var project = (from p in db.Projects
                       where p.ProjectId == projectId
                       select new
                       {
                           companyName = p.Company.Name,
                           projectName = p.Name
                       }).Single();
        // Must append file type to file download responses
        var fileName = project.projectName + "-" + project.companyName + "-" + DateTime.Now.ToString("yyyyMMdd", CultureInfo.InvariantCulture) + ".xlsx";
        // Configure response
        Response.Clear();
        Response.BufferOutput = false;
        Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
        Response.AddHeader("content-disposition", "attachment; filename=" + fileName);
        // Create and populate excel package
        var matrixSpreadsheet = ExcelHelper.BuildMatrixExcel(projectId);
        matrixSpreadsheet.SaveAs(Response.OutputStream);
    }
    catch (Exception e)
    {
        return Content("Error: " + e.Message);
    }
    // Download okay - No ViewResult
    return new EmptyResult();
}
Works fine in every browser I have tested, but Firefox 18.0.1 (I have yet to test other FF versions) trims the file name at the first space, so "someproject - somecompany - thedate" becomes just "someproject". I can do a Replace and remove the spaces, but this makes some file names look a bit odd. The file extension seems to be intact and there are no other issues, but I wondered if anyone could offer an explanation or fix?
You should place the filename between quote characters ("filename").
Okay, found the answer here while researching another issue: File Download issue in FireFox only
Response.AddHeader("Content-Disposition",
string.Format("attachment; filename = \"{0}\"",
System.IO.Path.GetFileName(FileName)));
This will also give the file the correct content type when you choose to save rather than open in browser in FireFox.

HtmlUnit Session Management

I'm trying to log in to a Facebook page using HtmlUnit and view its HTML content. I'm filling in the login credentials through HtmlUnit, but I don't see the session being carried over when the submit button is clicked.
I couldn't find much content on HtmlUnit session management classes. I have also attached the code that I'm currently using to attempt this. Any help appreciated!
WebClient webClient = new WebClient();
HtmlPage page1 = webClient.getPage("https://www.facebook.com");
List<HtmlForm> listF = page1.getForms();
HtmlForm form = null;
for(int i = 0; i < listF.size(); i++)
{
    if(listF.get(i).getId().equals("login_form"))
    {
        form = listF.get(i);
    }
}
HtmlTextInput uName = form.getInputByName("email");
HtmlPasswordInput passWord = form.getInputByName("pass");
HtmlSubmitInput button = form.getInputByValue("Log In");
uName.setValueAttribute(FACEBOOK_UNAME);
passWord.setValueAttribute(FACEBOOK_PASS);
HtmlPage page2 = button.click();
Found the answer. I just enabled cookies before starting to get web pages. It works.
I added the piece of code below:
WebClient webClient = new WebClient();
CookieManager cookieMan = webClient.getCookieManager();
cookieMan.setCookiesEnabled(true);
Another piece of advice: if you are trying to restore an HTTP session in HtmlUnit, instead of using webClient.getCookieManager().addCookie(cookie);, use this:
webClient.addCookie("cookieName=cookieValue", URL, null);
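For example, a minimal sketch of restoring a saved session cookie before the first request (the cookie name/value here are placeholders):
WebClient webClient = new WebClient();
// cookie string is "name=value"; the URL decides which host the cookie applies to; the third argument is the origin (may be null)
webClient.addCookie("sessionId=abc123", new java.net.URL("https://www.facebook.com"), null);
HtmlPage page = webClient.getPage("https://www.facebook.com");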

Same Ajax is not working in IE more than one time

On my webpage, when the user clicks the forgot-password button I ask for email, security key, etc. When the user clicks the send-mail button I pass the email, security key, etc. to an Ajax function named 'sendmail(par1,par2,par3)' [code is below]. If the user provides an existing mail id, security key, etc., rtstr[1] is set to 1 [one], so 'Mail send successfully' is displayed. But if the user enters the info again [without refreshing the page] and clicks the send-mail button, it doesn't work in IE, although it works perfectly in Firefox.
var xmlhttp1;
xmlhttp1 = GetXmlHttpObject();
function sendmail(Mailforpwd, Secquestion, Secanswer) {
    if (xmlhttp1 == null) {
        alert("Browser does not support HTTP Request");
        return;
    }
    var url = "SendEmail.php";
    url = url + "?Email=" + Mailforpwd;
    url = url + "&Squestion=" + Secquestion;
    url = url + "&Sanswer=" + Secanswer;
    xmlhttp1.onreadystatechange = stateChanged;
    xmlhttp1.open("GET", url, true);
    xmlhttp1.send(null);

    function stateChanged() {
        if (xmlhttp1.readyState == 4) {
            var Result = xmlhttp1.responseText;
            rtstr = Result.split('#');
            //alert(xmlhttp1.responseText);
            //alert(rtstr[0]);
            //alert(rtstr[0]);
            if (rtstr[0] == 1) {
                document.getElementById("Errorcredentials").innerHTML = "Mail send successfully";
            }
            else if (rtstr[1] == 0) {
                //alert(document.getElementById("Errorcredentials").innerHTML);
                document.getElementById("Errorcredentials").innerHTML = "Please provide Exist information";
            }
            else {
                document.getElementById("Errorcredentials").innerHTML = "There is a problem in sending mail, please try after sometime";
            }
        }
    }
}
function GetXmlHttpObject() {
    if (window.XMLHttpRequest) {
        // code for IE7+, Firefox, Chrome, Opera, Safari
        return new XMLHttpRequest();
    }
    if (window.ActiveXObject) {
        // code for IE6, IE5
        return new ActiveXObject("Microsoft.XMLHTTP");
    }
    return null;
}
My problem here is that the second time, the function stateChanged() is not called. If I put an alert in this function, the first time it displays the alert box, but the next time it doesn't. This is my problem. SendEmail.php is called each time.
Whenever I have this problem it is because IE caches your AJAX request. The best way to avoid this is to append a random number as a key in your query string each time.
url = url + "&rand=" + Math.random();
Or, better, since your AJAX request appears to be causing some action to happen server-side, why don't you use HTTP POST instead of GET?
xmlhttp1.open("POST", url, true);
This is a caching problem. Append the current date/time to your URL to make it unique.
url = url + "&rand=" + (new Date());
Just swap these lines in your code:
xmlhttp1.onreadystatechange = stateChanged;
xmlhttp1.open("GET", url, true);
After fixing, it looks like:
xmlhttp1.open("GET", url, true);
xmlhttp1.onreadystatechange = stateChanged;
That's it!
