I'm creating a web application using Parse and have found that in order for a user to authenticate I need to make all requests over HTTPS. I'm able to switch this over and get it to work correctly, but when I do I get all kinds of mixed content errors because I'm retrieving PFFile objects, which only return a non-secure URL.
This wouldn't even be a huge concern with Chrome or Safari, but of course IE needs to present a message to the user and block all this content. Are there any potential workarounds? Why can't Parse just put a setting in the app to enable files to be served from a secure URL? This seems completely ridiculous. How do people get around this? Are you completely avoiding the use of PFFile?
Replace http:// with https://s3.amazonaws.com/.
So if you start with this:
http://files.parsetfss.com/b05e3211-bf8b-.../tfss-fa825f28-e541-...-jpg
The final URL will look something like this:
https://s3.amazonaws.com/files.parsetfss.com/b05e3211-bf8b-.../tfss-fa825f28-e541-...-jpg
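If you are doing this in the browser with the JavaScript SDK, a minimal sketch of that rewrite (assuming the file URL has the files.parsetfss.com form shown above; the helper name is mine) would be:

// Rewrite a Parse-hosted file URL so it is served over HTTPS from S3.
// Assumes the URL looks like http://files.parsetfss.com/<app-id>/<file-name>.
function secureParseFileUrl(url) {
    return url.replace("http://", "https://s3.amazonaws.com/");
}

// e.g. secureParseFileUrl(file.url()) before assigning it to an <img> src.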
We are trying to use the Cobalt (20.stable) browser as the browser for our SPA web application.
My requirement is to be able to change the URL at runtime. What I was able to find in the code is:
starboard::shared::starboard::Application::Link(const char* link_data)
which ends up sending:
kSbEventTypeLink
Unfortunately this is not working, as the code ignores the call; the handling reaches this point:
// TODO: Remove this when terminal application states are properly handled.
if (deep_link_event->IsH5vccLink()) {
  browser_module_->Navigate(GURL(deep_link_event->link()));
}
In my case I'm trying to change the URL to, let's say, https://www.example.com.
There should be a way to do that, since when navigating we can always have a link that will cause the browser to go to some URL, right?
The porting layer is not supposed to control navigation directly. Instead, your Starboard implementation may send a deep link event, which can be intercepted by the web app, which then performs the navigation. See h5vcc_runtime.idl for the Web API.
That said, if you are building an SPA, why do you even need to change the URL? The initial URL of a web app is controlled by the --url command-line switch.
When you say runtime, are you looking to change the initial URL when the app is first launched? If so, you could just use the --url parameter.
So you could do the following:
cobalt --url="https://www.example.com"
I did a patch to allow changing the URL.
I just need to call starboard::shared::starboard::Application::Link("https://www.example.com").
Inside this call a DeepLinkEvent is posted.
Patch : https://gofile.io/?c=9GvNHX
Cobalt does not navigate for you. The JavaScript receives the deep link with the function it sets on h5vcc.runtime.onDeepLink and then does whatever it wants with it. As an SPA, it will parse the URL and load new content from its server in its own internal data format (e.g. protocol buffers, JSON, etc.), which it uses to update its own DOM to show new content.
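As a rough sketch of the web-app side (based on my reading of h5vcc_runtime.idl; handleDeepLink and its routing call are hypothetical):

// Register for deep links sent via Application::Link() / kSbEventTypeLink.
if (window.h5vcc && h5vcc.runtime) {
    handleDeepLink(h5vcc.runtime.initialDeepLink);        // link the app was launched with, if any
    h5vcc.runtime.onDeepLink.addListener(handleDeepLink); // links that arrive while running
}

function handleDeepLink(link) {
    if (!link) return;
    // Hypothetical SPA routing: fetch the content for this URL and update the DOM
    // instead of navigating the whole page.
    myRouter.show(link);
}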
Navigating is not the point of an SPA, since that would make it no longer a single-page application. However, there may be cases, such as a loader app, that will want to make some initial decisions and then load the actual SPA. That loader app would have to have the appropriate CSP rules in place, then set window.location to the URL of the page to navigate to.
Note: The code you found in Application::OnDeepLinkEvent() is a remnant that previously supported the H5vccURLHandler, which was removed in Cobalt 20. It's not meant to navigate to arbitrary deeplinks.
So I have a site where I upload large video files using HTML5 to Azure Storage using a SAS signature. It seems to work fine on most systems and browsers, but it doesn't seem to work on iPhones. I finally routed the call through Fiddler via proxy and got the response from the storage server.
Here is the CORS rule I have set up.
What am I missing here?
Good evening,
There are a few things I would like you to try simultaneously, when you get a chance:
1. Change your Allowed Headers to: "Origin,X-Requested-With,Content-Type,Accept,Authorization,Accept-Language,Content-Language,Last-Event-ID,X-HTTP-Method-Override, x-ms-*". NOTE: You may not need all of these, but for now, add them all to see if we can get it working.
2. Change your Allowed Methods to: NONE, PUT, OPTIONS
3. Set the Max Age (seconds) field to 0
4. Create another rule, and do not have a comma-separated list of allowed origins. Make a separate rule for each origin. (I've heard of certain browsers not liking the CSV).
Once all is said and done, if the above does not work, try removing "Authorization" from the allowed headers since it looks like you are not using that (but first, try it with it).
Please let me know if you make any progress with the above ideas.
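For reference, those suggestions add up to a rule along these lines. This is only an illustrative sketch: the property names are indicative (the exact names depend on whether you configure the rule through the portal, PowerShell, or an SDK), https://example.com stands in for your site's origin, and the ExposedHeaders entry is my assumption rather than part of the suggestions above.

// Illustrative CORS rule reflecting suggestions 1-4 (one rule per origin).
var corsRule = {
    AllowedOrigins: ["https://example.com"],   // no comma-separated list; add one rule per origin
    AllowedMethods: ["PUT", "OPTIONS"],        // OPTIONS is needed for the preflight request
    AllowedHeaders: ["Origin", "X-Requested-With", "Content-Type", "Accept", "Authorization",
                     "Accept-Language", "Content-Language", "Last-Event-ID",
                     "X-HTTP-Method-Override", "x-ms-*"],
    ExposedHeaders: ["x-ms-*"],                // assumption: expose the storage headers to the page
    MaxAgeInSeconds: 0
};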
I'm trying to load some external content into a div on my page using jQuery's load function. The load method works fine with local content, but if you want something outside your domain, it won't work.
$("#result").load("http://extrnal.com/page.htm #data);
(It actually works in IE with a security warning, but refuses to work in Chrome at all.) The jQuery documentation says that this is expected, because cross-domain content is restricted for security reasons. I get the same warning if I use the .getJSON method.
OK, after googling a bit I found a very interesting approach of using YQL for loading content. I've tried some examples, like this:
var request = "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D%22http%3A%2F%2Ffinance.yahoo.com%2Fq%3Fs%3Dyhoo%22&format=json&diagnostics=true&callback=?";
$.getJSON(request, function (json) {
    alert(json);
});
And it really works!
What I don't understand now is that http://query.yahooapis.com is also a cross-domain resource, but the browser (both IE and Chrome) works fine with it.
What's the difference? What am I missing?
Thank you
The results you are getting back from YQL are in JSON format, which is permitted for cross-site AJAX calls like this. It's the same mechanism that allows you to communicate with web services on external sites via JSON (i.e. the Twitter API).
Details here - http://www.wait-till-i.com/2010/01/10/loading-external-content-with-ajax-using-jquery-and-yql/
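For what it's worth, you can also ask jQuery for this behaviour explicitly. A minimal sketch using the built-in JSONP support, pointed at the same YQL endpoint from the question (the response shape is whatever YQL returns, typically under query.results):

$.ajax({
    url: "http://query.yahooapis.com/v1/public/yql",
    data: {
        q: 'select * from html where url="http://finance.yahoo.com/q?s=yhoo"',
        format: "json"
    },
    dataType: "jsonp", // jQuery injects a <script> tag and appends a callback parameter
    success: function (json) {
        console.log(json);
    }
});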
You can have the external site return JSON wrapped in a callback, like this:
callback({ key: "value", etc: 1 })
and define
function callback(json) {
    // ...process the JSON here...
}
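To complete the picture, the wiring behind that pattern is just a dynamically injected script tag. A minimal sketch (the endpoint URL is hypothetical; the external site must wrap its JSON in the callback name you pass it, as above):

// Inject a script tag; the response body is executed as JavaScript,
// so it must look like: callback({ key: "value", etc: 1 })
var script = document.createElement("script");
script.src = "http://external.com/data?callback=callback"; // hypothetical endpoint
document.getElementsByTagName("head")[0].appendChild(script);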
Thanks for your answers, but unfortunately neither of them answers my original question.
I've checked out related questions on Stack Overflow (I know I should have done that first) and found the reason for this behaviour.
The first code snippet uses AJAX/JSON to retrieve the data, which is blocked by the Same Origin Policy. But the request to YQL uses JSONP instead, which is OK.
JSONP was something I didn't know about; that's why I didn't understand the behaviour.
An introduction to JSONP can be found here:
http://ajaxian.com/archives/jsonp-json-with-padding
I'm new and just starting to develop on J2EE.
I am modifying an existing application (an open-source project).
I need to save an image, sent by the server, on the client, but I do not know how.
This activity must be done in a transparent manner, without affecting the existing operation of the application.
From the tests I have done, I get this error:
java.lang.IllegalStateException: getWriter() has already been called for this response.
How should I carry out this task, in your opinion?
How do I save the image locally on the client?
Update:
Thanks for the answers.
My problem is that:
The image is generated on the server, but not in response to a direct client request (there is no link to click on the web page); the picture is composed using other services on the Internet and
reconstructed on the server.
This image must be sent to the client to be saved locally,
so I'd like a window to appear where the user chooses where to save the image.
In addition, I'd like the rest of the application to be unaffected by this activity.
The application is already in production.
Thank you very much for your response.
From the tests I have done, I get this error: java.lang.IllegalStateException: getWriter() has already been called for this response.
In other words, you were trying to mix the binary data of the image with the character data of the HTML output, or you were trying to do this in a JSP instead of a Servlet. This is indeed not going to work. You need to send either the image or the HTML page exclusively in response to fully separate requests.
In your JSP/HTML page, just have a link to the image, like so (the file name here is just a placeholder):
<a href="imageservlet/some-image.jpg">click to download image</a>
Then, in a servlet listening on a url-pattern of /imageservlet/*, you just get the image as an InputStream from some datasource (e.g. from the local disk file system as a FileInputStream) and then write it to the OutputStream of the response the usual Java IO way.
You need to set at least the Content-Disposition response header to attachment to make sure that the client gets a Save As dialogue, else it will be displayed straight in the browser. Setting the Content-Type and Content-Length is also important, so that the browser knows what the server is sending and can predict how long the download may take.
response.setHeader("Content-Type", getServletContext().getMimeType(file.getName()));
response.setHeader("Content-Length", String.valueOf(file.length()));
response.setHeader("Content-Disposition", "attachment;filename=\"" + file.getName() + "\"");
You can find a complete basic servlet example in this article.
Note: you cannot control where the client saves the image; that would be a security hole. Websites would then be able to write malicious files to the client's disk without asking.
Update: as per your update, there are two options:
You need to let the client itself fire two HTTP requests (I've answered this in your subsequent question)
Create a client-side application which does the whole task directly on the client side and then embed it in your web page, for example a Java Applet. With an applet you have full control over the client environment. You can execute almost any Java code you'd like and you can write files to disk directly without asking the client for a save location. You only need to have the applet signed by a 3rd-party company, or the client needs to confirm a security warning before it runs.
It's up to the browser how all types of output are handled. Web pages are given a content type of text/html, which the browser understands and ends up rendering as a page that we can see. Images are given a content type of image/jpeg etc., which are rendered as images when in a page. To force a download prompt, one needs to use a binary file content type rather than an image one, so the browser forces the download rather than showing the image. To ensure this, use something like "application/octet-stream".
I'm doing an AJAX download that is being redirected. I'd like to know the final target URL the request was redirected to. I'm using jQuery, but also have access to the underlying XMLHttpRequest. Does anyone know a way to get the final URL?
It seems like I'll need to have the final target insert its URL into a known location in the headers or response body, then have the script look for it there. I was hoping to have something that would work regardless of the target though.
Additional note: I'm asking how my code can get the full url from production code, which will run from the user's system. I'm not asking how I can get the full url when I'm debugging.
The easiest way to do this is to use Fiddler or Wireshark to examine the HTTP traffic. Use Fiddler at the client if your interface uses a browser, otherwise use Wireshark to capture the traffic on the wire.
One word: Firebug. It is a Firefox plugin. Never do any kind of AJAX development without it.
Activate Firebug and select Net, then perform your AJAX request. This will show the URL that is called, the entire request (header and body) and the entire response (once again, header and body). It also allows you to step through your JavaScript and debug it - breakpoints, watches, etc.
I'll second the Firebug suggestion. You'll see the URL as the "Location" header in the HTTP response.
It sounds like you also want to get this URL in JS? If so, you can get it off the XHR response object in the callback (which you can also inspect using Firebug!). :)
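If you can rely on newer browsers, you can do this from production code without any debugging tool by reading XMLHttpRequest.responseURL once the request completes; it holds the final URL after any redirects. Older browsers don't expose it, so the known-header/known-body trick you describe stays as the fallback. The endpoint below is hypothetical:

var xhr = new XMLHttpRequest();
xhr.open("GET", "/download/start", true); // hypothetical endpoint that redirects
xhr.onload = function () {
    if (xhr.responseURL) {
        console.log("Redirected to: " + xhr.responseURL); // final URL after redirects
    } else {
        // Fallback: read the URL the target inserted into a known header or the response body.
    }
};
xhr.send();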