I'm building an app that uses a calendar on a Google Apps domain with SSL enforced domain-wide. I first hit the problem in a Rails app using the GCal4Ruby library, which requests the allcalendars feed URL over plain HTTP (GCal4Ruby debug output snippet [sic]):
…
url = http://www.google.com/calendar/feeds/default/allcalendars/full
Starting post
Header: Authorization: GoogleLogin auth=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Header: GData-Version: 2.1
Redirect recieved, resending get to https://www.google.com/calendar/feeds/default/allcalendars/full?gsessionid=xxxxxxxxxxxxxxxxxxxxxx
Redirect recieved, resending get to https://www.google.com/calendar/feeds/default/allcalendars/full?gsessionid=xxxxxxxxxxxxxxxxxxxxxx
Redirect recieved, resending get to https://www.google.com/calendar/feeds/default/allcalendars/full?gsessionid=xxxxxxxxxxxxxxxxxxxxxx
…
This was interesting because it seemed to continue forever: each response redirected to the same HTTPS URL, the library re-requested it, and the loop never terminated. I think I've fixed this in GCal4Ruby locally by adding the ability to request the allcalendars feed over HTTPS in the first place (i.e. https://www.google.com/calendar/feeds/default/allcalendars/full).
What worries me is that the Google documentation makes no mention of the allcalendars feed needing HTTPS. On top of that, when I access the same domain using the Zend GData library in PHP, it works fine against the non-SSL private feed (i.e. http://www.google.com/calendar/feeds/r-calendar.com_xxxxxxxxxxxxxxxxxxxxxxxxxxx%40group.calendar.google.com/private/full).
So, the question: what am I misunderstanding? Is it just the allcalendars feed that needs to be accessed over SSL, while the rest of the private feeds can safely use the authentication token over plain HTTP?
Anyone have any insight, or pointers to some good docs?
So, it looks like the redirect itself may be a normal part of authentication, but the library isn't handling it correctly, perhaps because of differences between how Google Apps and regular Google Accounts work on the backend. The Zend library, by contrast, seems to handle the redirect more robustly. That's my current guess, anyway.
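For illustration, here's a minimal sketch of the kind of defensive redirect handling I'd expect a robust client to do: cap the number of hops and re-send the auth headers on each one. This is plain Node.js rather than the Ruby library in question, and the auth token is a placeholder:

const https = require("https");

// Placeholder token; the real one comes back from the ClientLogin handshake.
const AUTH = "GoogleLogin auth=xxxx";
const MAX_REDIRECTS = 5;

function fetchFeed(url, hops = 0) {
  if (hops > MAX_REDIRECTS) throw new Error("redirect loop detected");
  https.get(url, { headers: { "Authorization": AUTH, "GData-Version": "2.1" } }, (res) => {
    if (res.statusCode >= 300 && res.statusCode < 400 && res.headers.location) {
      // Follow the redirect (e.g. the gsessionid URL) instead of looping forever.
      res.resume();
      return fetchFeed(res.headers.location, hops + 1);
    }
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => console.log(body));
  });
}

fetchFeed("https://www.google.com/calendar/feeds/default/allcalendars/full");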
Related
I have a GitHub Pages website that makes requests to a hosted server that is HTTP-only, and my browser blocks them as mixed content.
It's an assignment for university, so I don't really want to pay for HTTPS on the server, and I can't move the front-end elsewhere because I have already sent the URL to my professor as the place where my web app will be hosted.
Is there anything I can do that doesn't involve paying much money?
I encountered this same issue when making API calls. Here's an easy workaround:
1. Take your API endpoint (for this example, let's say http://api.catphotos.io/).
2. Prepend the CORS proxy link https://cors-anywhere.herokuapp.com/ to your link. Make sure to leave the http:// on your endpoint.
3. Your new link should look like this: https://cors-anywhere.herokuapp.com/http://api.catphotos.io/
4. Make your API calls with your new link!
Source: johnalcher.me
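For reference, a quick sketch of what that looks like in browser code, using fetch and the example endpoint from above:

const PROXY = "https://cors-anywhere.herokuapp.com/";
const API = "http://api.catphotos.io/"; // keep the http:// scheme on the endpoint

fetch(PROXY + API)
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error("proxied request failed:", err));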
According to the GitHub Pages help docs:
HTTPS enforcement is required for GitHub Pages sites created after June 15, 2016 and using a github.io domain.
That means you can't disable the https:// redirect. You could use a custom domain which then leaves the https:// stuff up to you.
Or you could move the stuff from the other server to GitHub Pages, assuming it's not too big.
Is it OK to pass passwords like this, should the method be POST, or does it not matter?
xmlhttp.open("GET","pas123",true);
xmlhttp.send();
Additional info: I'm building this on a local virtual web server, so I don't think I'll have HTTPS until I put some money up front for a real web server :-)
EDIT: According to Gumo's link, encodeURIComponent should be used. Should I do xmlhttp.send(encodeURIComponent(password)), or would that cause errors in the password matching?
Post them via HTTPS, then you don't need to worry about that ;)
But note that the page which sends the data must be accessed via HTTPS too, due to the same-origin policy.
As for your money limitation: you can use a self-signed certificate, or you can get a certificate for free from https://startssl.com/.
All HTTP requests are sent as text, so the particulars of whether it's a GET or POST or PUT... don't really matter. What matters for security in transmission is that you send it via SSL (and handle it safely on the other end, of course).
You can use a self-signed cert until something better is made available. It will be a special hell later if you don't design with https in mind now :)
It shouldn't matter: the main reason for not using GET on conventional web forms is that the details are visible in the address bar, which isn't an issue when using AJAX.
All HTTP requests (GET/POST/etc.) are sent in plain text, so they could be captured with network tracing software (e.g. Wireshark). To protect against this you will need to use HTTPS.
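Putting those answers together, here's a minimal sketch of the safer pattern, assuming a hypothetical /login endpoint served over HTTPS. Note that encodeURIComponent belongs around the value inside a form-encoded body, not around the whole body:

var xmlhttp = new XMLHttpRequest();
// Hypothetical endpoint; the page itself must also be served over HTTPS.
xmlhttp.open("POST", "https://example.com/login", true);
xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
// Encode only the value so characters like & or = in the password survive.
xmlhttp.send("password=" + encodeURIComponent(password));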
We need to consume an external REST API and dynamically update content on our website, and we have run into the age-old problem of cross-origin Ajax requests.
I've read up on JSONP, but I wouldn't go down that route in a million years, as it seems like a rather dirty hack.
As a solution to this issue, is it "right" and "proper" to have a local service that acts as a proxy for any requests to an external API? The client would make an Ajax call to ../RestProxy/MakeRequest, passing it the details of the request to make against the external API; the proxy performs the request and returns whatever is passed back.
Any thoughts would be appreciated.
There are three ways to do this:
1. JSONP
This is accepted by many popular APIs and frameworks. jQuery makes it easy. I would recommend this.
2. Proxy
Works pretty much as you described. It adds an extra hop, plus server code and server load on your side. However, it does allow you to filter or otherwise manipulate the results before sending them to the client (see the sketch after this list).
3. Rely on Access-Control-Allow-Origin
This is a header that the server can set to allow you to read JSON directly from their server even though you aren't on the same domain. This eliminates the need for the JSONP hack, but it requires that the server be set up to support it, and it requires a web browser that supports it.
Access-Control-Allow-Origin is supported in:
IE8+
Firefox 3.6+
Safari 4.0+
Chrome 6+
iOS Safari 3.2+
Android browser 2.1+
If you need to support IE7, then this option isn't for you.
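As promised in option 2, here is a minimal sketch of such a proxy, assuming a Node/Express back-end and Node 18+'s built-in fetch. The route name matches the question; the upstream URL is a placeholder:

const express = require("express");
const app = express();

app.get("/RestProxy/MakeRequest", async (req, res) => {
  try {
    // Forward the call to the external API. Hard-coded here; in practice the
    // target could be chosen from a whitelist keyed by a query parameter.
    const upstream = await fetch("https://api.example.com/data");
    const data = await upstream.json();
    res.json(data); // same-origin from the browser's point of view
  } catch (err) {
    res.status(502).json({ error: "upstream request failed" });
  }
});

app.listen(3000);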
If I want a client to always use an HTTPS connection, do I only need to include the headers in the application code, or do I also need to make a change on the server? And how is this different from simply redirecting a user to an HTTPS page every single time they attempt to use HTTP?
If you just have HTTP -> HTTPS redirects, a client might still try to POST sensitive data to you (or GET a URL that has sensitive data in it) over plain HTTP first - this would leave the data exposed publicly. If the client knows your site is HSTS, it won't even try to hit it via HTTP, so that exposure is eliminated. It's a pretty small win IMO - the bigger risk is the vast number of root CAs that everyone trusts blindly thanks to policies at Microsoft, Mozilla, Opera, and Google.
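For concreteness, a minimal sketch of enabling HSTS, assuming an Express app that is already reachable over TLS; the one-year max-age is just an example policy:

const express = require("express");
const app = express();

app.use((req, res, next) => {
  // Browsers only honor this header on responses served over HTTPS; once
  // seen, they skip the insecure HTTP hop entirely until max-age expires.
  res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  next();
});

app.get("/", (req, res) => res.send("hello over HTTPS"));
app.listen(3000); // behind a TLS terminator in this sketch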
Hey SO, I've got an API I'm making calls to from a browser application. Said API lives on a server that requires IP whitelisting and HTTP Digest Authentication.
To meet the whitelisting requirement, I'm running all API calls through a proxy, which is whitelisted. The calls originate from an iframe, currently populated by an index.html file.
What I need to know is how I can authenticate via HTTP Digest in the background. Most of the resources I can find online cover the usual interactive HTTP Digest Authentication setup, but what I'm looking to do is automate the login.
Despite the non-secretive subject matter, it is somehow critical that I keep the digest parameters hidden from users. Perhaps I could change the served file to index.php and then somehow set the magic headers? Even then, if the calls are made via XHR, would the index.php headers authenticate those separate requests?
Overall, I'm just lost, and the API developers in question are not exactly responsive, so I thought I'd turn here.
It appears that in the end, this was not possible. I had to switch to building a thin back-end to route requests through.
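For anyone who lands here later: the Digest response itself is straightforward to compute inside such a thin back-end. A minimal sketch using Node's crypto module, assuming a qop="auth" challenge; all the values below are placeholders you would normally parse from the 401 response's WWW-Authenticate header:

const crypto = require("crypto");
const md5 = (s) => crypto.createHash("md5").update(s).digest("hex");

// Placeholder credentials and challenge values.
const username = "user", password = "secret";
const realm = "api", nonce = "abc123", qop = "auth";
const method = "GET", uri = "/resource";
const nc = "00000001", cnonce = crypto.randomBytes(8).toString("hex");

// RFC 2617: response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
const ha1 = md5(`${username}:${realm}:${password}`);
const ha2 = md5(`${method}:${uri}`);
const response = md5(`${ha1}:${nonce}:${nc}:${cnonce}:${qop}:${ha2}`);

const authorization =
  `Digest username="${username}", realm="${realm}", nonce="${nonce}", ` +
  `uri="${uri}", qop=${qop}, nc=${nc}, cnonce="${cnonce}", response="${response}"`;
console.log(authorization); // send as the Authorization header from the proxy

Keeping this computation on the server also keeps the credentials out of the browser, which was the obfuscation requirement in the question.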