Cross-domain content usage from client script (security issues) - AJAX

I'm trying to load some external content into a div on my page using jQuery's load function. The load method works fine with local content, but if you request something outside your own domain, it won't work.
$("#result").load("http://external.com/page.htm #data");
(It actually works in IE with a security warning, but refuses to work in Chrome at all.) The jQuery documentation says this is expected, because cross-domain content is restricted for security reasons. I get the same warning if I use the .getJSON method.
OK, after googling a bit I found a very interesting approach that uses YQL for loading content, and I tried some examples like this:
var request = "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D%22http%3A%2F%2Ffinance.yahoo.com%2Fq%3Fs%3Dyhoo%22&format=json&diagnostics=true&callback=?";
$.getJSON(request, function (json) {
    alert(json);
});
And it really works!
What I don't understand now is that http://query.yahooapis.com is also a cross-domain resource, yet the browser (both IE and Chrome) works fine with it.
What's the difference? What am I missing?
Thank you

The results you are getting back from YQL are in JSON format, which is permitted for cross-site AJAX calls like this. It's the same mechanism that allows you to communicate with external sites' web services via JSON (e.g. the Twitter API).
Details here - http://www.wait-till-i.com/2010/01/10/loading-external-content-with-ajax-using-jquery-and-yql/
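For reference, a rough sketch of that YQL approach applied to the original example (the target URL and the "data" id are placeholders for your own page; data.query.results is where YQL puts the scraped markup):

// Build a YQL query that scrapes the #data element from the external page.
var site = "http://external.com/page.htm";
var yql = "http://query.yahooapis.com/v1/public/yql?q=" +
    encodeURIComponent('select * from html where url="' + site + '" and xpath="//*[@id=\'data\']"') +
    "&format=json&callback=?";

// callback=? makes jQuery issue the request as JSONP instead of plain XHR.
$.getJSON(yql, function (data) {
    // data.query.results holds the matched markup, or null if nothing matched
    console.log(data.query.results);
});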

The external site can return the JSON wrapped in a callback, like this:
callback({key: "value", etc: 1})
and on your own page you define:
function callback(json) {
    // process the response here
}
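To complete the picture, the external response reaches your page through a plain script tag, roughly like this (the endpoint URL is hypothetical):

<script>
function callback(json) {
    // process the response here
    console.log(json.key);
}
</script>
<!-- The server wraps its JSON in callback(...), so loading the script executes your function -->
<script src="http://external.com/data?callback=callback"></script>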

Thanks for your answers, but unfortunately neither of them answers my original question.
I've checked out the related questions on Stack Overflow (I know I should have done that first) and found the reason for this behavior.
The first code snippet uses plain AJAX/JSON to retrieve the data, and that is blocked by the Same Origin Policy. But the request to YQL uses JSONP instead, which is allowed.
JSONP was something I didn't know about, which is why I didn't understand the behaviour.
Introduction info on JSONP could be found here:
http://ajaxian.com/archives/jsonp-json-with-padding

Related

Parse.com Mixed Content Error

I'm creating a web application using Parse and have found that in order for a user to authenticate I need to make all requests using HTTPS. I'm able to switch this over and get it to work correctly, but when I do I get all kinds of mixed content errors because I'm retrieving PFFile objects, which only return a non-secure URL.
This wouldn't even be a huge concern with Chrome or Safari, but of course IE needs to present a message to the user and block all this content. Are there any potential workarounds? Why can't Parse just put a setting in the app to enable files to be served from a secure URL? This seems completely ridiculous. How do people get around this? Are you completely avoiding the use of PFFile?
Replace http:// with https://s3.amazonaws.com/.
So if you start with this:
http://files.parsetfss.com/b05e3211-bf8b-.../tfss-fa825f28-e541-...-jpg
The final url will look something like this:
https://s3.amazonaws.com/files.parsetfss.com/b05e3211-bf8b-.../tfss-fa825f28-e541-...-jpg
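A minimal JavaScript sketch of that rewrite (the helper name is just illustrative):

// Rewrites an insecure Parse-hosted file URL to the HTTPS S3 form described above
function secureParseFileUrl(url) {
    return url.replace("http://", "https://s3.amazonaws.com/");
}

// Usage (insecureUrl is whatever URL your file object returns):
var secureUrl = secureParseFileUrl(insecureUrl);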

How to call a route like a new request

I'm trying to call my own API with Laravel. I've tried all the options, like:
the answers in:
Consuming my own Laravel API
or in this blog:
http://blog.antoine-augusti.fr/2014/04/laravel-calling-your-api/
and this plugin:
https://github.com/teepluss/laravel4-hmvc
Sorry if I sound dumb, I don't understand all of this yet, but none of those approaches makes a full, 100% real request.
For example, with a normal AJAX POST request I can do this to get some parameter:
Request::createFromGlobals()->request->get('some-parameter')
but with the other techniques this just returns null.
I am using this approach because that's how the "thephpleague/oauth2-server" package reads the POST parameters
( https://github.com/thephpleague/oauth2-server/blob/master/src/AuthorizationServer.php#L234 )
So my question: is there a real way to call another route and make it look 100% like a normal request?
From what it sounds like you are trying to do, you could use Snoopy or cURL to make actual HTTP calls.
The real question is why these need to be actual calls. The first link you posted has some really great examples that would work well for any calls that go directly back to your Laravel application. That article links to a module called HMVC, which keeps your code looking neat and simplifies the process.
Any HTTP Client will do this.
Guzzle
Buzz
Requests

Response rendered as JSON in IE for browsable APIs

In IE, when I try to browse the REST APIs, I get an application/json response instead of the browsable API (text/html) response (it returns an HTML response on Firefox). I am using Django REST framework 2.2.5 for this.
I read through the documents and understood that in order to overcome the problem of broken headers in IE we need to use TemplateHTMLRenderer explicitly in the view, so I have added the following to the class definition of my view, but I am still getting a JSON response. Am I not doing it correctly, or am I missing something else?
class CustomReports(generics.GenericAPIView):
    renderer_classes = (renderers.TemplateHTMLRenderer)
Can you please help me fix the problem so that I get an HTML response in IE as well?
Which version of IE are you using? I believe newer versions of IE should send correct Accept headers.
I probably wouldn't bother trying to fix things up to work around IE's broken behavior, but instead just make sure that you're including format suffixes in your URLs. Then you can simply use the .api suffix to see the browsable API, or the .json suffix to see the plain JSON.
E.g. instead of http://127.0.0.1:8000/api-root/, use http://127.0.0.1:8000/api-root/.api.

How to obtain the Firefox user agent string?

I'm building an add-on for Firefox that simulates a website, but running from a local library. (If you want to know more, look here.)
I'm looking for a way to get hold of the user-agent string that Firefox would send if it were doing plain HTTP. I'm implementing the nsIProtocolHandler myself and serving my own implementation of nsIHttpChannel, so from a peek at the source, it looks like I'll have to do all the work myself.
Unless there's a contract/object-id on nsHttpHandler I could use to create an instance just for a brief moment to get the UserAgent? (Though I notice I'll need to call Init() because it does InitUserAgentComponents() and hope it'll get to there... And I guess the http protocol handler does the channels and handlers so there won't be a contract to nsHttpHandler directly.)
If I have a little peek over the wall I notice this globally available call ObtainUserAgentString which does just this in that parallel dimension...
Apparently Firefox changed how this was done in version 4. Have you tried:
alert(window.navigator.userAgent);
You can get it via XPCOM like this:
var httpHandler = Cc["#mozilla.org/network/protocol;1?name=http"].
getService(Ci.nsIHttpProtocolHandler);
var userAgent = httpHandler.userAgent;
If for some reason you actually do need to use NPAPI as you suggest in your tags, you can use NPN_UserAgent to get it; however, I would be shocked if you actually needed to do that just for an extension. Most likely Anthony's answer is more what you're looking for.

Is there a way to see the final URL retrieved by an XMLHttpRequest?

I'm doing an AJAX download that is being redirected. I'd like to know the final target URL the request was redirected to. I'm using jQuery, but also have access to the underlying XMLHttpRequest. Does anyone know a way to get the final URL?
It seems like I'll need to have the final target insert its URL into a known location in the headers or response body, then have the script look for it there. I was hoping to have something that would work regardless of the target though.
Additional note: I'm asking how my production code, which will run on the user's system, can get the full URL. I'm not asking how I can get the full URL while I'm debugging.
The easiest way to do this is to use Fiddler or Wireshark to examine the HTTP traffic. Use Fiddler at the client if your interface uses a browser, otherwise use Wireshark to capture the traffic on the wire.
One word: Firebug. It is a Firefox plugin; never do any kind of AJAX development without it.
Activate Firebug and select Net, then perform your AJAX request. This will show the URL that is called, the entire request (header and body) and the entire response (once again, header and body). It also allows you to step through your JavaScript and debug it - breakpoints, watches, etc.
I'll second the Firebug suggestion. You'll see the URL as the "Location" header in the HTTP response.
It sounds like you also want to get this URL in JS? If so, you can get it off the XHR response object in the callback (which you can also inspect using Firebug!). :)
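If you go the route described in the question and have the final target write its own URL into a custom response header, reading it back in the jQuery callback could look roughly like this (the X-Final-Url header name and the endpoint are purely assumptions; nothing sets that header by default):

$.ajax({
    url: "/download",  // placeholder endpoint that issues the redirect
    success: function (body, status, jqXHR) {
        // Only works if the final target itself sets this custom header (assumption)
        var finalUrl = jqXHR.getResponseHeader("X-Final-Url");
        console.log("Redirected to: " + finalUrl);
    }
});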
