What is the best way to get readable errors from AJAX views in Django?
I can read them in Chrome's Response tab, but only as plain text, and a lot of the information that is available on Django's standard error page is missing.
I found Django-Ajaxerror and sort of fixed it to run under Windows, but it still doesn't work. How does everyone else in the Django world manage this problem?
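One hedged workaround, not from this thread and with an invented class name, is a tiny middleware that logs the full traceback whenever an exception is raised while serving an XHR request, so the details land in your console or log file instead of a hard-to-read response body:

import logging
import traceback

logger = logging.getLogger(__name__)

class AjaxExceptionLoggerMiddleware(object):
    """Hypothetical middleware: add to MIDDLEWARE_CLASSES (old-style Django
    middleware) to log the traceback of any exception raised by an AJAX view."""

    def process_exception(self, request, exception):
        if request.is_ajax():
            # Django calls process_exception while the exception is being
            # handled, so format_exc() still has access to the traceback.
            logger.error("AJAX view failed: %s\n%s", exception, traceback.format_exc())
        return None  # fall through to Django's normal error handling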
I used to use Firebug, but they stopped developing it and claimed that many of its features were included in Firefox's built-in developer tools, so I uninstalled the extension.
I miss one feature: being able to view my response as HTML. For now it's only raw or JSON.
Laravel produces error responses in HTML, and it's hard to read the message in raw output.
Can someone point me to an extension that adds an HTML response view, or tell me if there is an option for this somewhere?
Best regards!
P.S. I mean the XHR response.
Check the response of your request on the Network tab of the browser's inspect/developer tools panel.
I have a project in meanjs.
It has html5mode disabled, so my URLs look like this:
http://localhost:3000/#!/products
I am trying to implement AJAX snapshots in order to allow Google's crawlers to see content generated by JavaScript on the client side.
I installed a module called MEAN-SEO:
http://blog.meanjs.org/post/78474995741/mean-seo
Now when I access the following URL:
http://localhost:3000/?_escaped_fragment_=
I am redirected to:
http://localhost:3000/?_escaped_fragment_=/#!/
And when I click on "products" or when I access directly, I am redirected to:
http://localhost:3000/?_escaped_fragment_=/#!/products
After reading the Google specification detailed here https://developers.google.com/webmasters/ajax-crawling/docs/getting-started , what I need to get is something without hashbangs, like the following:
http://localhost:3000/?_escaped_fragment_=/products
What am I doing wrong?
Kind Regards.
Any specific reasons why you want html5mode off?
Here is something a lot of people have missed: search engines (both Google and Bing) can now handle AJAX-based content.
Their crawlers now understand pushState, so if you just turn html5mode on you don't need any special handling to get your SEO working. You can load your content via AJAX, you can set title tags and meta tags with JavaScript, and so on and so forth, and the crawlers will understand your content the same as if you had rendered it server-side. There is no need to do HTML snapshotting or _escaped_fragment_ handling for SEO anymore.
This has been announced on their developer blogs, but unfortunately most of the documentation hasn't been updated with this information, so it has gone under the radar for a lot of people.
One word of warning, though: Facebook does not handle pushState, so if you want to support the Facebook crawler you still need to handle that separately.
I need to display some data (a text message) from a URL (an HTML file) which is in a different domain. I thought about using an iframe to display the markup. The problem is that if that site is down, I will see a 404 error in the iframe, which I want to avoid. So I thought about using Dojo to make an AJAX call to that URL to get the response and then using innerHTML to insert the response into the DOM. That is all I need. But due to cross-domain AJAX restrictions, I don't think it is possible. We are using Dojo in our application. I searched Google for a good implementation of cross-domain scripting with Dojo, but all I found were things like JSONP. I don't want to make the remote domain return JSONP; it is just an HTML file, and that file contains the markup I need to display. Can someone suggest a good way to do this?
Sadly, as Nakul already mentioned in the comments, the same-origin policy does not allow cross-domain XHR requests (at least not in a cross-browser way).
The workarounds involve either cooperation from the cross-domain site (JSONP, CORS, various iframe communication tricks) or setting up a proxy on your own server so that all "cross-domain" requests go through your own domain first.
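As a rough illustration of the proxy idea, and only assuming your server side happened to be Django/Python (the view name and REMOTE_URL below are made up), the view fetches the remote HTML on the server and re-serves it from your own origin, so the browser's XHR is no longer cross-domain:

from urllib.request import urlopen  # use urllib2 on Python 2
from django.http import HttpResponse

REMOTE_URL = "http://other-domain.example.com/snippet.html"  # hypothetical

def remote_snippet(request):
    # Server-to-server fetches are not subject to the same-origin policy.
    with urlopen(REMOTE_URL, timeout=5) as upstream:
        html = upstream.read()
    # Serve the markup from our own domain; the client can now request
    # this view with dojo.xhrGet and drop the result into innerHTML.
    return HttpResponse(html, content_type="text/html")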
I am trying to find the optimal architecture for an AJAX-heavy Django application I'm currently building. I'd like to have a consistent way of doing forms, validation, fetching data, and the JSON message format, but I find it exceedingly hard to find a solution that can be used consistently.
Can someone point me in the right direction or share their view on best practice?
I make everything as normal views which display normally in the browser. That includes all the replies to AJAX requests (sub pages).
When I want to make bits of the site more dynamic, I then use jQuery to do the AJAX, or in this case AJAH, and just load the contents of one of the divs in the sub page into the requesting page.
This technique works really well - it is very easy to debug the sub pages as they are just normal pages, and jQuery makes your life very easy using these as part of an AJA[XH]ed page.
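A minimal Django-side sketch of this pattern (view and template names are made up): the sub page is an ordinary view you can open and debug directly in the browser, and on the client you would point jQuery's .load() at the same URL, e.g. $("#panel").load("/products/ #content"), to pull just the #content div into the requesting page.

from django.shortcuts import render

def product_list(request):
    # An ordinary view: visit /products/ directly to debug it as a normal
    # page, or fetch the same URL with jQuery and extract one div from it.
    products = ["spam", "eggs"]  # stand-in data for illustration
    return render(request, "products/list.html", {"products": products})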
Of all the answers to this, I can't believe no one has mentioned django-piston yet. It's mainly promoted for building REST APIs, but it can output JSON (which jQuery, among others, can consume) and works just like a view in that you can do anything with a request, making it a great option for implementing AJAX interactions (or AJAJ [JSON], AJAH, etc.). It also supports form validation.
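For reference, a bare-bones sketch of what that looks like with piston's documented BaseHandler/Resource pattern; the handler, URL, and data here are invented.

# handlers.py
from piston.handler import BaseHandler

class ProductHandler(BaseHandler):
    allowed_methods = ('GET',)

    def read(self, request):
        # Whatever is returned gets serialised by piston's emitters
        # (JSON by default), ready for jQuery to consume.
        return {"products": ["spam", "eggs"]}

# urls.py
from django.conf.urls.defaults import patterns, url
from piston.resource import Resource
from handlers import ProductHandler

urlpatterns = patterns('',
    url(r'^api/products/$', Resource(ProductHandler)),
)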
I can't think of any standard way to add Ajax to a Django application, but you can have a look at this tutorial.
You will also find more details on Django's page about Ajax.
Two weeks ago I wrote up how I implement sub-templates so they can be used for both "normal" and "ajax" requests (to Django it is the same). Maybe it is helpful for you.
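Presumably something along these lines (a sketch with invented template names, not the write-up itself): the view returns just the fragment for XHR requests and the full page otherwise, and the full page's template pulls in the same fragment with {% include %}, so the markup lives in exactly one place.

from django.shortcuts import render

def product_list(request):
    context = {"products": ["spam", "eggs"]}  # stand-in data
    if request.is_ajax():
        # AJAX callers get only the fragment.
        return render(request, "products/_list_fragment.html", context)
    # Normal requests get the full page, whose template does
    # {% include "products/_list_fragment.html" %} to reuse the same markup.
    return render(request, "products/list.html", context)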
+1 to Nick for pages displaying normally in the browser. That seems to be the best starting point.
The problem with the simplest AJAX approaches, such as those Nick and vikingosegundo propose, is that you'll have to rely on the innerHTML property in your JavaScript; it's the only way to dump the new HTML sent back from the server into the page. Some would consider this a Bad Thing.
Unfortunately I'm not aware of a standard way to replicate Django's form rendering in JavaScript. My approach (which I'm still working on) is to subclass the Django Form class so it outputs bits of JavaScript along with the HTML from as_p() etc.; these then replicate the form by manipulating the DOM.
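This isn't that subclass (it isn't shown), just a rough sketch of the general idea: a Form subclass that, alongside its normal HTML rendering, exposes a machine-readable description of its fields for client-side script to consume. All names are illustrative.

import json
from django import forms

class JSDescribedForm(forms.Form):
    """Hypothetical base class: a normal Django form plus a JSON field spec."""

    def field_spec(self):
        # Describe each field so JavaScript can rebuild/validate it client-side.
        spec = {}
        for name, field in self.fields.items():
            spec[name] = {
                "type": field.__class__.__name__,
                "required": field.required,
                "label": str(field.label or name),
            }
        return json.dumps(spec)

class ContactForm(JSDescribedForm):
    email = forms.EmailField()
    message = forms.CharField(widget=forms.Textarea)

A template could then render {{ form.as_p }} as usual and embed the output of field_spec (marked safe) in a script block for the DOM-manipulating JavaScript to read.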
From experience I know that managing an application where you generate the HTML on the server side and just "insert" it into your pages becomes a nightmare. It is also impossible to test using the Django test framework. If you're using Selenium or a similar tool it's ok, but you need to wait for the AJAX requests to return, so you end up needing tons of sleeps in your test script, which may slow down your test suite.
If the purpose of using the Ajax technique is to create a good user interface, I would recommend going all in, like the Gmail interface, and doing everything in the browser with JavaScript. I have written several apps like this using nothing but jQuery, state machines for managing UI state, and JSON with REST on the backend. Django, IMHO, is a perfect match for the backend in this case. There is even third-party software for generating a REST interface to your models, which I've never used myself, but as far as I know it is great for the simple things; you of course still need to write your own business logic.
With this approach, you do run into the problem of duplicating code in the JS and in your backend, such as form handling, validation, etc. I have been thinking about solving this by generating structured information about the forms and validation logic which I can use in the JS. This could be compiled at deploy time and loaded like any other JS file.
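Not the author's actual tooling, just a sketch of the "compile it at deploy time" idea: dump the generated spec into a .js file that the browser loads like any other static script (the file path and the FORM_SPECS name are made up).

import json

def write_form_specs(specs, path="static/js/form_specs.js"):
    # Emit a plain JS file assigning the spec to a global the front end reads.
    with open(path, "w") as out:
        out.write("var FORM_SPECS = " + json.dumps(specs, indent=2) + ";\n")

if __name__ == "__main__":
    # In a real deploy script the specs would come from the Django forms
    # themselves (e.g. the field_spec() idea sketched earlier).
    write_form_specs({"contact": {"email": {"type": "EmailField", "required": True}}})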
Also, avoid XML. The browsers are slow at parsing it, it is a pain to generate and a pain to work with in the browser. Use JSON.
I'm currently testing:
jQuery & backbone.js client-side
django-piston (intermediate layer)
I'll write up my findings later on my blog http://blog.sserrano.com
I am using Ruby to screen-scrape a web page (created in ASP.NET) which uses a GridView to display data. I can successfully read the data displayed on page 1 of the grid, but I am unable to figure out how to move to the next page of the grid so I can read all the data.
The problem is that the page-number hyperlinks are not normal hyperlinks (with a URL) but javascript hyperlinks that cause a postback to the same page.
An example of the hyperlink:-
6
I recommend using Watir, a Ruby library designed for browser testing, if you're already using Ruby for processing. For one thing, it gives you a much nicer interface to the DOM elements on the page, and it makes clicking links like this easier:
ie.link(:text, '6').click
Then, of course you have easier methods for navigating the table as well. It's easy enough to automate this process:
(1..total_number_of_pages).each do |next_page|
  ie.link(:text, next_page.to_s).click  # Watir matches link text as a string
  # table processing goes here
end
I don't know your use case, but this approach has its advantages and disadvantages. For one thing, it actually runs a browser instance, so if this is something you need to run frequently and quietly in the background in a completely automated way, this may not be the best approach. On the other hand, if it's ok to launch a browser instance, then you don't have to worry about all that postback nonsense, and you can just click the link as if you were a user.
Watir: http://wtr.rubyforge.org/
You'll need to figure out the actual URL.
Option 1a: Open the page in a browser with good developer support (e.g. Firefox with its web developer tools) and look through the source to find where __doPostBack is defined. Figure out what URL it's constructing. Note that it might not be in the main page source, but in something that the page loads.
Option 1b: Ditto, but have Ruby do it. If you're fetching the page with Net::HTTP you've already got the tools to find the definition of __doPostBack (the body as a string, Ruby's grep, and the ability to request additional files, such as those referenced in script tags).
Option 2: Monitor the traffic between a browser and the page (e.g. with a logging proxy) to find out what the URL is.
Option 3: Ask the owner of the web page.
Option 4: Guess. This may not be as bad as it sounds (e.g. if the original URL ends with "...?page=1" or something) but in general this is the least likely to work.
Edit (in response to your comment on the other question):
Assuming you're using the Net::HTTP library, you can do a postback by replacing your get with a post, e.g. my_http.post(my_url, post_data) instead of my_http.get(my_url) (note that Net::HTTP#post also takes the URL-encoded form data as its second argument).
Edit (in response to danieltalsky's answer):
Watir may be a really good solution for you (I'm kicking myself for not having thought of it), but be aware that you may have to manually fire the event or jump through other hoops to get what you want. As a specific gotcha, with any asynchronous fetch like this you need to make sure that the full response has come back before you scrape it; that isn't a problem when you're doing the request inline yourself.
You will have to perform the postback. The data is passed back to the server with a form POST. Like Markus said, use something like Firebug or the Developer Tools in IE 8, plus Fiddler, to watch the traffic. But honestly, this is a web form using the bloated GridView, so you will be in for a fun adventure. ;)
You'll need to do some investigation in order to figure out what HTTP request the javascript execution is performing. I've used the Mozilla browser with the Firebug plugin and also the "Live HTTP Headers" plugin to help determine what is going on. It will likely become clear to you which requests you will need to make in order to traverse to the next page. Make sure you pay attention to any cookies getting set.
I've had really good success using Mechanize for scraping. It wraps all of the HTTP communication, HTML parsing and searching (using Nokogiri), redirection, and cookie handling. But it doesn't know how to execute JavaScript, which is why you will need to figure out which HTTP request to perform on your own.