Appending Arabic Text from AJAX XML Response Freezes Chrome

I am working with a legacy site that grabs some XML content via AJAX, constructs a block of HTML code with it, and then appends it to a blank div. The XML makes heavy use of Arabic text.
It seems to work fine in all browsers except Chrome. In Chrome, page loading will die at the point of appending the string to the div. When I remove the Arabic text from the XML, the page loads just fine.
The HTML being generated has the following meta tag:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
and the XML has this encoding tag:
<?xml version="1.0" encoding="UTF-8"?>
Here is a sample of the XML that is being passed:
<segment>
<content>السَّلامُ‮ ‬عَلَيْكُم‮.‬</content>
<linked>true</linked>
<glossWord>السَّلامُ‮ ‬عَلَيْكُم</glossWord>
<glossTrans>Hello. (Literally "Peace be upon you").</glossTrans>
<glossExpl>This is a very commonly used greeting. It works for any time of the day. It can also be used to mean 'goodbye'.</glossExpl>
</segment>
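The appending code is roughly like this (a sketch of the pattern only; the real legacy code isn't shown, and the element id and field handling are made up):
// Pull the Arabic fields out of the parsed XML response (xmlDoc)
// and append them to the previously blank div.
var content = xmlDoc.getElementsByTagName('content')[0].textContent;
var trans = xmlDoc.getElementsByTagName('glossTrans')[0].textContent;
var html = '<p class="arabic">' + content + '</p><p>' + trans + '</p>';
document.getElementById('segmentTarget').innerHTML += html;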
Interesting tidbit: when I went to create this question in Chrome, pasting the above into the form ALSO broke Chrome, and the browser froze solid. I had to reopen and submit it in Firefox. If this is a bug in Chrome, it would be nice to find a way to work around it, as I don't really like the idea of telling people "Don't use X browser" to access a site.

Had a similar issue, and it turned out to be Google Translate in Chrome on OS X 10.6.8 having trouble when I used multiple languages/character sets. I got around this by adding the class "notranslate" to HTML elements that I didn't want Google Translate to bomb out on.
To quickly see if this works for you, add the class "notranslate" to your body and see if the page stops hanging. Hope this works for you!
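For example (a minimal sketch; the ajaxTarget id is made up, the class name is the part that matters):
<body class="notranslate">
<!-- Google Translate skips elements carrying this class -->
<div id="ajaxTarget" class="notranslate"></div>
</body>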

Related

Is there any way to load a local file into Ti.WKWebView?

Using https://github.com/appcelerator-modules/Ti.WKWebView, I create my HTML content on the fly using data from a JSON feed and then pipe that into the WKWebView, which works beautifully: WKWebView.setHtml(myContent);
Now, with a standard Ti web view, I have a line in the myContent variable (basically a giant string of HTML) that says <link rel="stylesheet" type="text/css" href="myfiles/style.css" />, which styles things wonderfully.
This, however, doesn't work in WKWebView. Upon further trial and error, I see that I can't include any local files in WKWebView this way. If I point it to a remote file on a web server, it works just fine.
But for some reason it doesn't want to pull up local files. Is this a limitation of Ti.WKWebView's design, or am I doing something wrong?
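For reference, the setup is roughly this (a sketch; only setHtml is from the description above, the createWebView factory and window boilerplate are my assumptions about the module following the standard Titanium pattern):
// Rough sketch of the setup described above.
var WKWebView = require('ti.wkwebview');

var win = Ti.UI.createWindow();
var webView = WKWebView.createWebView({
    width: Ti.UI.FILL,
    height: Ti.UI.FILL
});

// myContent is a big HTML string built from the JSON feed; the local
// stylesheet link inside it is what fails to load in WKWebView.
var myContent = '<html><head>' +
    '<link rel="stylesheet" type="text/css" href="myfiles/style.css" />' +
    '</head><body>...</body></html>';

webView.setHtml(myContent);
win.add(webView);
win.open();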
Praying hansemannnn reads these pages, as I can see no other way of creating an issue on GitHub.

IE document mode

<meta http-equiv="X-UA-Compatible" content="IE=8,chrome=1" />
I've put this between my <head></head> tags.
It works in my IE 11, but for my client, who uses IE 8, the document mode always falls back to IE 7.
Everyone else in his company who uses IE 8 has the same problem too, not just him.
I've seen some people say that using content="IE=edge" is a fix,
but I can't; I must use content="IE=8" for other things to run.
Is there any way to fix this, so that the browser uses the IE 8 document mode?
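One detail worth checking (a sketch, not a confirmed fix for your case): IE only honors X-UA-Compatible when the meta tag appears before every other element in the <head> except <title> and other <meta> tags, so a script or stylesheet above it can cause it to be silently ignored:
<head>
<title>Page title</title>
<meta http-equiv="X-UA-Compatible" content="IE=8,chrome=1" />
<!-- scripts and stylesheets must come AFTER the X-UA-Compatible meta -->
<link rel="stylesheet" href="styles.css" />
<script src="app.js"></script>
</head>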

Can I make my ajax website 'crawlable'?

I'm currently building a music-based website, and I want to build something like this template. It uses AJAX and deep linking. (It also makes use of the History.js library; please notice how there's no '#' in the URLs.)
The reason I want to use these 'ajaxy' methods (or maybe use the template altogether) is so that when music is playing, it will remain uninterrupted as the user navigates the site.
My worry is that my site won't be crawlable by Google, but I think I can modify code in the page source to fix that. If I look at the source code of the template, in the head I see
<meta name="description" content="">
<meta name="author" content="">
<meta name="keywords" content="">
Now if I add this to the head:
<meta name="fragment" content="!">
will that make the site crawlable? Is there other code I need to add on top of this? Or is it just not possible for this template?
I'm following this guide: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started, and I'm on step 3. I will of course have to complete the other steps, but I don't know whether I'm heading in the right direction or towards a dead end!
Any help would be very much appreciated. Many thanks in advance.
From what you said, it sounds like your site updates the address bar with clean URLs as you navigate via AJAX. That's good. The next thing you want to do is make sure those URLs work: if you go directly to a URL, do you see the specific content it represents? And would a crawler also see the correct content without running JavaScript? Progressive enhancement works well for that. The final thing you want to do is make sure bots can pick up those URLs.
I've not played with the fragment meta tag, but it looks like it is only for the home page, and you still need to implement the escaped-fragment page. Maybe it does support other pages, but the article doesn't cover that.
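For what it's worth, the scheme in that guide works by having the crawler re-request any page carrying <meta name="fragment" content="!"> with an ?_escaped_fragment_= query parameter, and your server is expected to answer that variant with a static HTML snapshot. A minimal sketch of the server side (Node/Express is my assumption, and renderSnapshot is a hypothetical helper):
var express = require('express');
var app = express();

app.get('*', function (req, res, next) {
    if ('_escaped_fragment_' in req.query) {
        // The crawler's variant of the URL: serve a pre-rendered
        // HTML snapshot of what the AJAX page would show.
        return res.send(renderSnapshot(req.path));
    }
    next(); // normal visitors get the AJAX-driven page
});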

How to scrape AJAX-generated content from a JSF site?

I am currently playing around with different scraping techniques and found out that things can get pretty complicated quickly when a lot of JavaScript is involved.
I had some success with HTMLUnit, which seems to interpret JavaScript rather well, but I am looking for a more lightweight solution.
So the problem I am facing now is: I want to retrieve the results of a specific page, which is generated by an AJAX call triggered by a click on a certain button.
The call itself is rather simple: just an HTTP POST to a certain URL with a few parameters submitted in the POST body. The problem is that the server complains when I submit the HTTP POST to the AJAX endpoint without actually opening the containing page first.
What I basically do for testing is:
curl -v -d "AJAXREQUEST=..." https://myhost/ajaxurl
And what I get is:
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta name="Ajax-Response" content="true" />
<meta name="Ajax-Expired" content="View state could't be restored - reload page ?" />
</head>
</html>
The server is running JSF 1.2. What do I have to do to get the results from the AJAX call? I am not really a JSF expert...
If I had to guess, JSF doesn't have a session associated with the request being sent with curl, and therefore the objects associated with the page don't exist. For curl, look at http://curl.haxx.se/docs/httpscripting.html, section 10, cookies. You would have to pull the page, get the cookies, then do the HTTP POST with the cookies (this starts being a lot of work with curl).
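Roughly, the curl flow would look like this (a sketch; the URLs and POST fields are placeholders, and JSF will additionally expect the javax.faces.ViewState hidden-field value from the first response to be echoed back in the POST):
# 1. Fetch the containing page, saving the session cookie
curl -c cookies.txt https://myhost/containingpage
# 2. Replay the AJAX post with that cookie (plus the ViewState value
#    extracted from the page fetched in step 1)
curl -b cookies.txt -d "AJAXREQUEST=...&javax.faces.ViewState=..." https://myhost/ajaxurl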
However, I would instead suggest looking at Selenium, which has an IDE that generates Java code to interact with the JavaScript.

Ajax magic: How is Kotaku achieving Ajax *and* Google accessibility?

Kotaku has launched a new design without hashbangs. Their site still clearly uses AJAX requests, but somehow it is still found through Google, and the content shows up in the page source. How do they do it? Their text seems to be contained inside a script of type text/javascript, but I don't understand what effect that has, or why they would do that.
(Of course, the first page request may just trigger a static, server-side constructed response. But check other articles: the site does load JSON through an AJAX request, with no page refresh.)
Have a look at this site for example:
http://kotaku.com/5800326/read-some-of-new-tomb-raider-game-right-now
No hashes, a very well-formed URL, and it appears in Google. I have read the Google AJAX guide, and as far as I understand it, Google only requests an HTML snapshot if you use #! inside your URL.
For your convenience, I made a screenshot that shows how the text looks inside the Chrome debugger. (What does "ganjaAjaxContent" mean?)
If you search for this article, it is the first match in Google.
Being able to do ajax without having to worry about Google search would be excellent.
Kotaku and the other Gawker sites are doing a number of things for SEO:
Submitting XML sitemaps for all of their content
http://kotaku.com/sitemap_today.xml
http://kotaku.com/sitemap.xml
Correct use of title and description tags for Google and Facebook
<title>Read Some of New Tomb Raider Game Right Now</title>
<meta name="fragment" content="!">
<meta name="title" content="Read Some of New Tomb Raider Game Right Now" />
<meta name="description" content="Upcoming Tomb Raider reboot doesn't have a release date yet, but website Siliconera apparently has the game's script and published what's reportedly an excerpt from it. Check it out. [Siliconera]" />
<meta property="og:title" content="Read Some of New Tomb Raider Game Right Now" />
<meta property="og:description" content="Upcoming Tomb Raider reboot doesn't have a release date yet, but website Siliconera apparently has the game's script and published what's reportedly an excerpt from it." />
Displaying HTML post content when JavaScript is turned off (inspect the <div class="post-body quick-post"></div> element)
So you're right: Google's first visit loads the semantic, accessible, server-side constructed page. While Google can crawl hashbang pages, it doesn't need to, because all of the pages are indexed via the sitemap.xml.
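The client-side half of that pattern looks roughly like this (a sketch; the element id, the ?partial=1 convention, and the URL handling are made up):
// Fetch just the article body and swap it into the page.
function fetchArticle(url) {
    var xhr = new XMLHttpRequest();
    xhr.onload = function () {
        document.getElementById('post-body').innerHTML = xhr.responseText;
    };
    xhr.open('GET', url + '?partial=1');
    xhr.send();
}

// On link clicks: fetch content and push a clean URL (no '#!').
function loadArticle(url) {
    fetchArticle(url);
    history.pushState({ url: url }, '', url);
}

// On Back/Forward: re-fetch without adding a new history entry.
window.onpopstate = function (e) {
    if (e.state) { fetchArticle(e.state.url); }
};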
Hope this answers all of your questions.
P.S. Having said all this, hashbangs are still bad for the web:
http://www.tbray.org/ongoing/When/201x/2011/02/09/Hash-Blecch
http://isolani.co.uk/blog/javascript/BreakingTheWebWithHashBangs
http://blog.benward.me/post/3231388630
