HTTP vs HTTPS while loading Google Maps API - performance

Is there a performance impact in choosing between HTTP and HTTPS while loading Google Maps API?

Not to any significant degree.

Related

Google API backend error - if we use Google Cloud Client Library, would we see less Backend Errors?

When we use Google Apps Script to call Google/YouTube APIs (such as the YouTube API, the YouTube Content ID API, etc.) with the 3-legged OAuth authentication approach, we sometimes get the message "backend error". If we try again, the same call succeeds. The backend error rate is sometimes pretty high.
We have also used (and could continue to use) the Google Cloud Client Library with a service account, i.e. the 2-legged OAuth authentication approach, to make the same API calls.
Since Google encourages us to use the newer Cloud Client Library instead of the older API library where we can, I am wondering whether the backend error rate will go down if we use the Google Cloud Client Library to call the Google API instead.
Or is the backend error purely on Google's backend, so that it does not matter which library we use to call the API?
Thanks!
Google Cloud's Client Libraries can give you some performance benefits by using gRPC: the gRPC-enabled API clients use protocol buffers and gRPC over HTTP/2 to talk to the RPC interface.
Protocol buffers are smaller and faster to serialize than JSON over HTTP to the REST interface, so they can provide real benefits in terms of throughput and CPU usage.
But if the failure happens behind the backend's RPC interface, the library you use makes no difference.
Also note that the client libraries can provide an exponential backoff strategy to handle transient errors and retries, along the lines of the sketch below.
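As an illustration only, here is a minimal sketch of that retry pattern in Python; call_with_backoff, call_api and is_retryable are hypothetical names, not part of any Google library:

```python
import random
import time

def call_with_backoff(call_api, is_retryable, max_attempts=5):
    """Retry a flaky call with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return call_api()
        except Exception as err:
            if attempt == max_attempts - 1 or not is_retryable(err):
                raise
            # Sleep roughly 1s, 2s, 4s, ... plus random jitter before retrying.
            time.sleep(2 ** attempt + random.random())
```

This is roughly the kind of policy the client libraries apply for you on retryable errors, which is why the "try again and it works" cases tend to be handled without extra code on your side.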

How to cache requests in HTTP/2

I'm developing a web application using Angular + Java, and I'm trying to improve performance with the HTTP/2 protocol. It works, but I'm not seeing a significant performance improvement, as you can see:
https://i.imgur.com/HXrWGlf.png
The time to load a specific page is approximately the same. The difference is that with HTTP/1.1 the resources are cached, while with HTTP/2 they aren't, as we see below:
https://i.imgur.com/WdSHp8q.png
Can I cache resources with HTTP/2 to load the page faster?
It's a Chrome issue with self-signed SSL certificates: Chrome doesn't cache resources served over a connection that uses a self-signed certificate. Here's the bug report: https://bugs.chromium.org/p/chromium/issues/detail?id=103875
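If you want to rule out missing cache headers on the server side, a quick check like this Python sketch will show whether Cache-Control/ETag are being sent at all (the URL is a made-up local address; adjust it to one of your static assets). If the headers are there, the problem really is the browser's handling of the self-signed certificate:

```python
import requests

# Hypothetical local URL for one of the assets that isn't being cached.
url = "https://localhost:8443/static/app.js"

# verify=False skips certificate validation, which is needed while the
# development server uses a self-signed certificate.
response = requests.get(url, verify=False)

print("Cache-Control:", response.headers.get("Cache-Control"))
print("ETag:", response.headers.get("ETag"))
print("Expires:", response.headers.get("Expires"))
```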

How does a websocket cdn work with bidirectional data?

I see that Cloudflare has a WebSocket CDN, but I'm confused about how it would cache bidirectional data. With a normal HTTP request, it would cache the response and then serve it from the CDN.
With a WebSocket, how does Cloudflare cache the data? Especially since the socket can be bidirectional.
Caching is really only a small part of what a CDN does.
CloudFlare (and really any CDN that would offer this service) would serve two purposes, off the top of my head:
Network connection optimization - The browser endpoint would be able to have a keepalive connection to whatever the closest Point of Presence (PoP) is to them. Depending on CloudFlare's internal architecture, it could then take an optimized network path to a PoP closer to the origin, or to the origin itself. This network path may have significantly better routing and performance than having the browser go straight to the origin.
Site consistency - By offering WebSockets, a CDN is able to let end users stay on the same URL without having to mess around with any cross-origin issues or complexities of maintaining multiple domains.
Both of these go hand in hand with what is often called "Full Site Acceleration" or "Dynamic Site Acceleration".

Secure Connection, NSURLSession to Django Rest API

More of a question of understanding rather than looking for a technical solution. I'm on a team working to build an iOS application that needs database support. We're using Swift for the application code. A Django REST API wraps a MySQL database on the backend. The two are communicating over HTTP using Swift's NSURLSession class.
We will be passing password information over one of the HTTP requests, so we want to upgrade those requests to HTTPS. On the API side we can force traffic through SSL middleware using django-sslify.
My concern is that including this library does nothing on the client side. As far as I know, we will only need to change the URL to use 'https://' rather than 'http://'. It seems that the data passed will only be secure once it reaches the API, rather than over the entire connection.
Is there anything we must do to secure the data being passed over the NSURLSession on Wi-Fi and mobile networks? Or is simply pointing the session at an API view that is forced through an SSL port enough to ensure the request is secure?
Please let me know if I am way off track, or if there are any steps other than django-sslify that I should take in order to make all HTTP communication secure!
Go SO!
This question is more about whether or not SSL is secure, and less about whether any of the tools being used make it less secure.
Luckily the Information Security Stack Exchange has your answer with an in-depth explanation as to how TLS does help secure your application.
When it comes to securing your Django site, though, django-sslify is a good start, but it's not a magic cure for security issues. If you can, it's recommended not to serve insecure responses at all, which works well for API hosts (api.github.com is one example), but not if your API is hosted on the same domain as your front-end application. There are other recommended Django apps available, such as django-secure (parts of which were integrated into Django 1.8; a sketch of those settings follows below).
You should also follow the Django security recommendations, and revisit them with new major releases. There are other resources you can look at, like "Is Django's built-in security enough" and many other questions on the Information Security Stack Exchange.
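As a rough sketch of what that built-in support looks like on Django 1.8+, the security middleware plus a few settings cover most of what django-sslify and django-secure were doing (the values here, such as the HSTS lifetime, are illustrative assumptions, not recommendations):

```python
# settings.py (Django 1.8+) -- a minimal sketch, not a complete security config.

MIDDLEWARE_CLASSES = (
    'django.middleware.security.SecurityMiddleware',  # absorbed from django-secure
    # ... the rest of your middleware ...
)

SECURE_SSL_REDIRECT = True        # redirect any plain-HTTP request to HTTPS
SECURE_HSTS_SECONDS = 31536000    # HSTS: tell browsers to stay on HTTPS
SESSION_COOKIE_SECURE = True      # never send the session cookie over plain HTTP
CSRF_COOKIE_SECURE = True         # same for the CSRF cookie
```

On the client side there is nothing extra to add beyond using the https:// URL: NSURLSession negotiates TLS for you, so the request is encrypted over the entire connection, not just once it reaches the API.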

Is it better to use CORS or nginx proxy_pass for a RESTful client-server app?

I have a client-server app where the server is a Ruby on Rails app that renders JSON and understands RESTful requests. It's served by nginx+Passenger and its address is api.whatever.com.
The client is an AngularJS application that consumes these services. It is served by a second nginx server and its address is whatever.com.
I can either use CORS for cross-subdomain AJAX calls or configure the client's nginx to proxy_pass requests to the Rails application.
Which one is better in terms of performance and less trouble for developers and server admins?
Unless you're Facebook, you are not going to notice any performance hit from having an extra reverse proxy. The overhead is tiny: it's basically parsing a bunch of bytes and then sending them over a local socket to another process. A reverse proxy in Nginx is easy enough to set up (see the sketch after this answer), so it's unlikely to be an administrative burden.
You should worry more about browser support. CORS is supported by almost every modern browser; the main exceptions are older versions of Internet Explorer and some older mobile browsers.
Juvia uses CORS but falls back to JSONP. No reverse proxy setup.
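If you do go the proxy_pass route, the client-side nginx config is only a few lines. This is a minimal sketch assuming the Rails/Passenger app answers on api.whatever.com and the Angular app calls it under a /api/ prefix (the prefix and file paths are made up for illustration):

```nginx
server {
    listen 80;
    server_name whatever.com;

    root /var/www/client/dist;   # Angular build output (hypothetical path)

    location /api/ {
        # The trailing slash strips the /api/ prefix before forwarding to Rails.
        proxy_pass http://api.whatever.com/;
        proxy_set_header Host api.whatever.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this in place the browser only ever talks to whatever.com, so CORS never comes into play.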

Resources