Why do we need HTTPS when we send the result to the user? - performance

The reason we need HTTPS (secured/encrypted data over the network):
We need to receive the user-side data (whether via a form or via the URL, whichever way the user sends data to the server over the network) securely, which is done by HTTP + SSL encryption. So in that case only the form, or whichever URL the user is posting/sending data to, has to be a secure URL, not the page that I am sending to the browser. E.g., when I need to serve a customer registration form, I have to send it from the server as an HTTPS URL; if I don't do that, the browser will give a warning like a mixed content error. Instead, is it wrong that browsers could have had some sort of parameter to mark only the form's target as a secure URL?
In some cases my server-side content can't be read by anyone other than those I allow; for that I can use HTTPS to deliver the content, with extra security measures on the server side.
Other than these two scenarios I don't see any reason for having HTTPS-encrypted content over the network. Let's assume a site with 10+ CSS files, 10+ JS files, 50+ images, 200 KB of markup, and a total weight of maybe 2-3 MB. If this whole content is encrypted, I have no doubt this is going to mean a minimum of 100-280 connections created between browser and server.
Please explain why we need to follow this way of delivering (most of us do it because browsers, search engines like Google, and W3C standards ask us to use it on every page).

why we need to follow this way of delivering
Because otherwise it's not secure. The browsers which warn about this are not wrong.
Let's assume a site with 10+ CSS files, 10+ JS files
Just one .js file served over non-HTTPS, and a man-in-the-middle attacker could inject arbitrary code into your HTTPS page, from which point they can completely control the user's interaction with your site. That's why browsers don't allow it, and give you the mixed content warning.
(And .css can have the same impact in many cases.)
Plus it's just plain bad security usability to switch between HTTP and HTTPS for different pages. The user is likely to fail to notice the switch, and may be tricked into entering data into (or accepting data from) a non-HTTPS page. All the attacker would have to do is change one of the links so it pointed to HTTP instead of HTTPS, and the usual process would be subverted.
have no doubt this is going to be min. of 100 - 280 connection creation between browser and server.
HTTP[S] reuses connections. You don't pay the SSL handshake latency for every resource linked.
HTTPS is really not expensive enough today to be worth worrying about, performance-wise, for a typical small web app.
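For illustration, here is a hypothetical exchange showing connection reuse (the host is made up; the headers are standard HTTP/1.1):

GET /style.css HTTP/1.1
Host: www.example.com
Connection: keep-alive

HTTP/1.1 200 OK
Content-Type: text/css
Connection: keep-alive
Keep-Alive: timeout=5, max=100

With HTTP/1.1 the connection stays open by default, so the browser fetches the next CSS/JS/image files over the same TCP (and TLS) connection; the handshake cost is paid per connection, not per resource.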


How to validate that a certain domain is reachable from browser?

Our single page app embeds videos from Youtube for end-users' consumption. Everything works great if the user has access to the Youtube domain and to the content of that domain's pages.
We do, however, frequently run into users whose access to Youtube is blocked by a web filter box on their network, such as https://us.smoothwall.com/web-filtering/ . The challenge here is that the filter doesn't actually kill the request; it simply returns another page instead, with an HTTP status 200. The page usually says something along the lines of "hey, sorry, this content is blocked".
One option is to try to fetch https://www.youtube.com/favicon.ico to prove that the domain is reachable. The issue is that these filters usually involve a custom SSL certificate to allow them to inspect the HTTP content (see: https://us.smoothwall.com/ssl-filtering-white-paper/), so I can't rely on TLS catching the swapped content for me via an incorrect certificate, and I will instead receive a perfectly valid favicon.ico file, except from a different site. There's also the whole CORS issue of issuing an XHR from our domain against youtube.com's domain, which means that if I want to get that favicon.ico I have to do it JSONP-style. However, even using a plain old <img> I can't test the contents of the image because of CORS (see Get image data in JavaScript?), so I'm stuck with that approach too.
Are there any proven and reliable ways of dealing with this situation and testing browser-level reachability towards a specific domain?
Cheers.
In general, web proxies that want to play nicely typically annotate the HTTP conversation with additional response headers that can be detected.
So one approach to building a man-in-the-middle detector may be to inspect those response headers and compare the results from when behind the MITM, and when not.
Many public websites will display the headers for an arbitrary request; redbot is one.
So perhaps you could ask the party whose content is being modified to visit a URL like: youtube favicon via redbot.
Once you gather enough samples, you could heuristically build a detector.
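As a rough sketch of that idea, a script could compare observed response headers against a known-good baseline. Heavy caveats: the header names below are just examples of markers some proxies add, CORS hides most headers on cross-origin responses, and the /ping endpoint is an invented same-origin URL you would have to provide yourself.

// Heuristic proxy/MITM hint detector: look for response headers that
// a transparent proxy may have injected into the conversation.
var PROXY_HEADER_HINTS = ['via', 'x-cache', 'proxy-agent'];

function looksProxied(url) {
  return fetch(url, { method: 'HEAD', cache: 'no-store' }).then(function (response) {
    // Only same-origin (or explicitly exposed) headers are visible here.
    return PROXY_HEADER_HINTS.some(function (name) {
      return response.headers.has(name);
    });
  });
}

looksProxied('/ping').then(function (suspect) {
  console.log(suspect ? 'proxy markers found' : 'no proxy markers');
});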
Also, some CDNs (e.g., Akamai) will allow customers to visit a URL from remote proxy locations in their network. That might give better coverage, although those locations are unlikely to be behind a blocking firewall.

How to describe interaction between web server and web client?

At the moment I have the following understanding (which, I assume, is incomplete and probably even wrong).
A web server receives requests from a client. The requests come to a particular "path" ("address", "URL") and have a particular type (GET, POST, and probably something else?). GET and POST requests can also carry variables and their values (which can be thought of as a "dictionary" or "associative array"). The parameters of GET requests are set in the address line (for example: http://example.com?x=1&y=2), while parameters of POST requests are set by the client (user) via web forms (in other words, a user fills in a form and presses the "Submit" button).
In addition to that we have what is called a SESSION (also known as COOKIES). This works the following way. When a web server gets a request (of GET or POST type), it checks the values of the sent parameters and, based on that, generates and sends back to the client HTML code that is displayed in a browser (and seen by the user). In addition to that, the web server sends some parameters (which again can be imagined as a "dictionary" or "associative array"). These parameters are saved by the browser somewhere on the client side, and when the client sends a new request, he/she also sends back the session parameters received earlier from the web server. In effect the server says: you got this from me, memorize it, and next time when you speak to me, give it back (so I can recognize you).
So, what I do not know is whether the client can see what exactly is in the session (what parameters are there and what values they have) and whether the client is able to modify the values of these parameters (or add or remove parameters). But what the user can do is decide not to accept any cookies (or session).
There is also something called "local storage" (it is available in HTML5). It works as follows. Like the SESSION, it is some information sent by the web server to the client and memorized (saved) by the client (if the client wants to). In contrast to the session, it is not sent back by the client to the server. Instead, JavaScript running on the client side (and sent by web servers as part of the HTML code) can access information from the local storage.
What I am still missing is how AJAX works. It is like, by clicking something in the browser, the user sends (via the browser) a request to the web server and waits for a response. Then the browser receives some response and uses it to modify (but not to replace) the page observed by the user. What I am missing is how the browser knows how to use the response from the web server. Is it written in the HTML code (something like: if this is clicked, send this request to the web server, and use its answer (provided content) to modify this part of the page)?
I am going to answer your questions on AJAX and LocalStorage, also on a very high level, since your definitions strike me as being on that level.
AJAX stands for Asynchronous JavaScript and XML.
Your browser uses an object called XMLHttpRequest in order to establish an HTTP request to a remote resource.
The client, being a client, is oblivious to what the remote server does. All it has to do is provide the request with a URL, a method, and optionally the request's payload. The payload is most commonly a parameter or a group of parameters that are received by the remote server.
The request object has several methods and properties, and it also has its ways of handling the response.
What I am missing is how the browser knows how to use the response from the web server.
You simply tell it what to do with the response. As mentioned above, the request object can also be told what to do with a response. It will listen for a response, and when one arrives, you tell the client what to do with it.
Is it (the response) written in the HTML code?
No. The response is written in whatever format the server served it in. Most commonly, it's Unicode text. A common way to serve a response is as a JSON (JavaScript Object Notation) object.
Whatever happens afterwards is a pure matter of implementation.
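A minimal sketch of that flow (the /api/user URL and the JSON shape are invented; the XMLHttpRequest calls themselves are standard):

// Create the request object and describe where and how to send it.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/user?id=42'); // method + URL (with optional parameters)

// Tell the object what to do once a response arrives.
xhr.onload = function () {
  if (xhr.status === 200) {
    var user = JSON.parse(xhr.responseText); // assuming the server sent JSON
    console.log('Hello, ' + user.name);
  }
};

xhr.send(); // a GET has no payload; a POST would pass one here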
LocalStorage
There is also something called "local storage" (it is available in HTML5). It works as follows. Like the SESSION, it is some information sent by the web server to the client and memorized (saved) by the client (if the client wants to)
Not entirely accurate. Local Storage is indeed a new feature, introduced with HTML5. It is a new way of storing data on the client, and is unique to an origin. By origin, we refer to a unique combination of protocol and domain (and, strictly, port).
The lifetime of a Local Storage object on a client (again, per unique origin) is entirely up to the user. That said, a client application can of course manipulate the data and decide what's inside a local storage object. You are right about the fact that it is stored and can be used on the client through JavaScript.
Example: some web tracking tools want to have some sort of a backup plan, in case the server that collects user data is unreachable for some reason. The web tracker, sometimes introduced as a JavaScript plugin, can write any event to the local storage first, and release it only when the remote server confirms that it received the event successfully, even if the user has closed the browser in between.
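A sketch of that backup pattern (the /track endpoint and the key name are invented for illustration):

// Write each event to localStorage first; only delete the queue once
// the server has confirmed receipt. The queue survives a closed browser.
function track(event) {
  var queue = JSON.parse(localStorage.getItem('pendingEvents') || '[]');
  queue.push(event);
  localStorage.setItem('pendingEvents', JSON.stringify(queue));
  flush();
}

function flush() {
  var queue = JSON.parse(localStorage.getItem('pendingEvents') || '[]');
  if (queue.length === 0) return;
  fetch('/track', { method: 'POST', body: JSON.stringify(queue) })
    .then(function (response) {
      if (response.ok) localStorage.removeItem('pendingEvents'); // confirmed
    });
}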
First of all, this is just a simple explanation to clarify your mind. To explain this stuff in more detail we would need to write a book. That being said, I'll go step by step.
Request
A request is a client asking for / sending data to a server.
This request has the following parts:
A URL (protocol + hostname/IP + path)
A Method (GET, POST, PUT, DELETE, PATCH, and so on)
Some optional parameters (the way they are sent depends on the method)
Some headers (metadata sent to the server)
Some optional cookies
An optional SESSION ID
Some explanations about this:
Cookies can be set by the client or by the server, but they are always stored by the client's browser. Therefore, the browser can decide whether to accept them or not, or to delete or modify them.
The session is stored on the server. The server sends the client a session ID to be able to recognize it in any future request.
Session and cookies are two different things. One is server side, and the other is client side.
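Schematically, the exchange looks like this (the cookie name and value are made up):

HTTP/1.1 200 OK
Set-Cookie: SESSIONID=abc123; Path=/; HttpOnly

GET /cart HTTP/1.1
Host: www.example.com
Cookie: SESSIONID=abc123

The server looks up abc123 in its own session store; the client only ever holds the ID, never the session data itself.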
AJAX
I'll omit the meaning of the acronym as you can easily google it.
The great thing about AJAX is the very first A, which stands for asynchronous, meaning that the JS engine (in this case built into the browser) won't block until the response gets back.
To understand how AJAX works, you have to know that it's very much alike a common request, but with the difference that it can be triggered without reloading the web page.
The content of the response is whatever you want it to be: from some HTML code to a JSON string, or even some plain text.
The way the response is treated depends on the implementation and programming. As an example, you could simply alert() the result of an AJAX call, or you could append it to a DOM element.
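For instance, a small sketch that appends the result of an AJAX call to the page (the /latest-news URL and the element id are assumptions; fetch is the modern alternative to XMLHttpRequest):

// Request a fragment without reloading the page, then append it.
fetch('/latest-news')
  .then(function (response) { return response.text(); })
  .then(function (html) {
    document.getElementById('news').insertAdjacentHTML('beforeend', html);
  });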
Local Storage
This doesn't have much to do with anything.
Local storage is just some disk space offered by the browser so you can save data in the browser that persists even if the page or the browser is closed.
An example
Browsers such as Chrome offer a JavaScript API to manage local storage. It's client side, and you can programmatically access this storage and perform CRUD operations. It's just like a non-SQL, non-relational DB in the browser.
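For example (these are the standard Web Storage calls; the key is arbitrary):

localStorage.setItem('theme', 'dark');     // create / update
var theme = localStorage.getItem('theme'); // read -> "dark"
localStorage.removeItem('theme');          // delete one key
localStorage.clear();                      // delete everything for this origin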
I will summarize your main questions along with a brief answer right below them:
Q1:
Can the client see what exactly is in the session?
A: No. The client only knows the "SessionID", which is metadata (all other session data is stored on the server only, and the client can't see or alter it). The SessionID is used by the server only to identify the client and to map the application process to its previous state.
HTTP is a stateless protocol, and this classic technique makes it behave statefully.
There are very rare cases where the complete session data is stored on the client side (but in such cases, the server should also encrypt the session data so that the client can't see/alter it).
On the other hand, there are web clients that don't have the capability to store cookies at all, or have features that prevent storing cookie data (e.g. the ability of the user to reject cookies from certain domains). In such cases, the workaround is to inject the SessionID into URL parameters, by using HTTP redirects.
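Schematically, that workaround might look like this (the URL and ID are invented):

GET /home HTTP/1.1
Host: www.example.com

HTTP/1.1 302 Found
Location: /home?SESSIONID=abc123

The client follows the redirect, and every link the server generates from then on carries the same SESSIONID parameter, so the server can recognize the client without any cookie.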
Q2:
What's the difference between HTML5 LocalStorage and Session?
LocalStorage can be viewed as the client's own 'session' data, or better said, a local data store where the client can save/persist data. Only the client (mainly from JavaScript) can access and alter the data. Think of it as JavaScript-controlled persistent storage, with the advantage over cookies that you can control what data you store, its structure, and its format. It's also more advantageous than storing data in cookies, which have their own limitations such as data size and structure.
Q3:
How AJAX works?
In very simple words, AJAX means loading on-demand data on top of an already loaded (HTML) page. A typical HTTP request would load the whole data of a page, while an AJAX request loads and updates just a portion of the (already loaded) page.
This being said, an AJAX request is very similar to a standard HTTP Request.
AJAX requests are controlled by JavaScript code and can enrich the interaction with the page. You can request specific segments of data and update sections of the page.
Now, remember the old days, when any interaction with a website (e.g. signing in, navigating to other pages, etc.) required a complete page reload? Back then, a lot of unnecessary traffic occurred just to perform any simple action. This in turn impacted site responsiveness, user experience, network traffic, etc.
This happened due to browsers' inability (at that time) to [a] perform a parallel HTTP request to the server and [b] render a partial HTML view.
Modern browsers come with these two features that enable AJAX technology - that is, invoking asynchronous (parallel) HTTP requests (AJAX HTTP requests) - and they also provide an on-the-fly DOM alteration mechanism via JavaScript (real-time HTML Document Object Model manipulation).
Please let me know if you need more info on these topics, or if there's anything else I can help with.
For a more profound understanding, I also recommend this nice web history article, as it explains how everything started from when HTML was created and what its purpose was (to define [at the time] rich documents), and then how HTTP was initially created and what problem it solved (at the time, to "transfer" static HTML). That explains why it is a stateless protocol.
Later on, as HTML and the web evolved, other needs emerged (such as the need to interact with the end user), and the Cookie mechanism enhanced the protocol to enable stateful client-server communication by using session cookies. Then AJAX followed. Nowadays the cookies come with their own limitations too, and we have LocalStorage. Did I also mention WebSockets?
1. Establishing a Connection
The most common way web servers and clients communicate is through a connection which follows the Transmission Control Protocol, or TCP. Basically, when using TCP, a connection is established between the client and server machines through a series of back-and-forth checks. Once the connection is established and open, data can be sent between client and server. This connection can also be termed a Session.
There is also UDP, or the User Datagram Protocol, which has a slightly different way of communicating and comes with its own set of pros and cons. I believe some browsers may recently have begun to use a combination of the two in order to get the best results.
There is a ton more to be said here, but unless you are going to be writing a browser (or become a hacker) this should not concern you too much beyond the basics.
2. Sending Packets
Once the client-server connection is established, packets of data can be sent between the two. TCP packets contain various bits of information to assist in communication between the two endpoints. For web programmers, the most important part of the packet will be the section which includes the HTTP request.
HTTP, the Hypertext Transfer Protocol, is another protocol, one which describes what the makeup/format of these client-server communications should be.
A most basic example of the relevant portion of a packet sent from a client to a server is as follows:
GET /index.html HTTP/1.1
Host: www.example.com
The first line here is called the Request Line. GET describes the method to be used (others include POST, HEAD, PUT, DELETE, etc.). /index.html describes the resource requested. Finally, HTTP/1.1 describes the protocol being used.
The second line is, in this case, the only header field in the request: the Host field, which gives the hostname of the server (which DNS resolves to an IP address).
[Since you mentioned it, the difference between a GET request and a POST request is simply that in a GET request the parameters (e.g. form data) are included as part of the Request Line, whereas in a POST request the parameters are included as part of the Message Body (see below).]
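Side by side, the same parameter sent both ways (host and path invented):

GET /search?q=cats HTTP/1.1
Host: www.example.com

POST /search HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 6

q=cats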
3. Receiving Packets
Depending on the request sent to the server, the server will scratch its head, think about what you asked it, and respond accordingly (aka whatever you program it to do).
Here is an example of a response packet sent from the server:
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
...
<html>
<head>
<title>A response from a server</title>
</head>
<body>
<h1>Hello World!</h1>
</body>
</html>
The first line here is the Status Line which includes a numerical code along with a brief text description. 200 OK obviously means success. Most people are probably also familiar with 404 Not Found, for example.
The second line is the first of the Response Header Fields. Other fields often added include Date, Content-Length, and other useful metadata.
Below the headers and the necessary empty line is finally the (optional) Message Body. Of course this is usually the most exciting part of the response, as it will contain things like HTML for our browsers to display for us, JSON data, or pretty much anything you can code in a return statement.
4. AJAX, Asynchronous JavaScript and XML
Based on all of that, AJAX is fairly simple to understand. In fact, the packets sent and received can look identical to those of non-AJAX requests.
The only difference is how and when the browser decides to send a request packet. Normally, upon page refresh, a browser will send a request to the server. However, when issuing an AJAX request, the programmer simply tells the browser to please send a packet to the server NOW, as opposed to on page refresh.
However, given the nature of AJAX requests, usually the Message Body won't contain an entire HTML document, but will carry smaller, more specific bits of data, such as the result of a database query.
Then, your JavaScript which makes the AJAX call can also act based on the response. Any JavaScript method is available, since making an AJAX call is just another JavaScript function call. Thus, you can do things like set innerHTML to add/replace content on your page with some HTML returned by the AJAX call. Alternatively, you could make an AJAX call which simply returns True or False, and then call some JavaScript function with an if-else statement. As you can hopefully see, AJAX has nothing to do with HTML per se; it is just a JavaScript mechanism which makes a request to the server and returns the response, whatever it may be.
5. Cookies
HTTP Protocol is an example of a Stateless Protocol. Basically, this means that each pair of Request and Response (like we described) is treated independently of other requests and responses. Thus, the server does not have to keep track of all the thousands of users who are currently demanding attention. Instead, it can just respond to each request individually.
However, sometimes we wish the server would remember us. How annoying would it be if every time I wanted to check my Gmail I had to log in all over again because the server forgot about me?
To solve this problem a server can send Cookies to be stored on the client's machine. The server can send a response which tells the client to store a cookie and what exactly it should contain. The client's browser is in charge of storing these cookies on the client's system, so the location of these cookies will vary depending on your browser and OS. It is important to realize, though, that these are just small files stored on the client machine which are in fact readable and writable by anyone who knows how to locate and understand them. As you can imagine, this poses a few different potential security threats. One solution is to encrypt the data stored inside these cookies so that a malicious user won't be able to take advantage of the information you made available. (Since your browser is setting these cookies, there is usually a setting within your browser which you can modify to either accept or reject cookies, or perhaps set a new location for them.)
This way, when the client makes a request from the server, it can include the Cookie within one of the Request Header Fields which will tell the server, "Hey I am an authenticated user, my name is Bob, and I was just in the middle of writing an extremely captivating blog post before my laptop died," or, "I have 3 designer suits picked out in my shopping cart but I am still planning on searching your site tomorrow for a 4th," for example.
6. Local Storage
HTML5 introduced Local Storage as a more secure alternative to Cookies. Unlike cookies, with local storage data is not actually sent to the server. Instead, the browser itself keeps track of State.
This alternative also allows much larger amounts of data to be stored, as there is no requirement for it to be passed across the internet between client and server.
7. Keep Researching
That should cover the basics and give a pretty clear picture as to what is going on between clients and servers. There is more to be said on each of these points, and you can find plenty of information with a simple Google search.

'Soft' move to HTTPS

I have a mobile site that I generally want to always be all HTTPS. It uses the HTML5 application cache to store all the data locally, so it's a fast user experience EXCEPT during the first page view.
My homepage will load on an iPhone over AT&T 4G in 1 second without HTTPS and in 4.5 seconds with HTTPS (round-trip latency is the killer here). So, I am trying to find a way to softly move the user to HTTPS so they have a great first impression, followed by a secure experience.
Here is what I am doing:
Externally, always referencing HTTP (i.e. press releases, etc.)
canonical = HTTP (so Google sees HTTP)
On the site pages all links are HTTPS
Once the user comes to the HTTP page, the HTTPS pages (all of them) will load via an application cache manifest in an iframe
Upon hitting the server for the first time (HTTP), the server will set a cookie, and the next time force a redirect to HTTPS (which is already cached)
Is this a good implementation to get the speed of HTTP with the security of HTTPS?

Amazon Cloudfront: private content but maximise local browser caching

For JPEG image delivery in my web app, I am considering using Amazon S3 (or Amazon Cloudfront if it turns out to be the better option) but have two, possibly opposing, requirements:
The images are private content; I want to use signed URLs with short expiration times.
The images are large; I want them cached long-term by the users' browser.
The approach I'm thinking of is:
User requests www.myserver.com/the_image
Logic on my server determines the user is allowed to view the image. If they are allowed...
Redirect the browser (is HTTP 307 best?) to a signed Cloudfront URL
The signed Cloudfront URL expires in 60 seconds, but its response includes "Cache-Control: max-age=31536000, private"
The problem I foresee is that the next time the page loads, the browser will be looking for www.myserver.com/the_image but its cache will be for the signed Cloudfront URL. My server will return a different signed Cloudfront URL the second time, due to the very short expiration times, so the browser won't know it can use its cache.
Is there a way round this without having my webserver proxy the image from Cloudfront (which obviously negates all the benefits of using Cloudfront)?
Wondering if there may be something I could do with ETag and HTTP 304, but I can't quite join the dots...
To summarize, you have private images you'd like to serve through Amazon Cloudfront via signed urls with a very short expiration. However, while access by a particular url may be time limited, it is desirable that the client serve the image from cache on subsequent requests even after the url expiration.
Regardless of how the client arrives at the cloudfront url (directly or via some server redirect), the client cache of the image will only be associated with the particular url that was used to request the image (and not any other url).
For example, suppose your signed url is the following (expiry timestamp shortened for example purposes):
http://[domain].cloudfront.net/image.jpg?Expires=1000&Signature=[Signature]
If you'd like the client to benefit from caching, you have to send it to the same url. You cannot, for example, direct the client to the following url and expect the client to use a cached response from the first url:
http://[domain].cloudfront.net/image.jpg?Expires=5000&Signature=[Signature]
There are currently no cache control mechanisms to get around this, including ETag, Vary, etc. The nature of client caching on the web is that a resource in cache is associated with a url, and the purpose of the other mechanisms is to help the client determine when its cached version of a resource identified by a particular url is still fresh.
You're therefore stuck in a situation where, to benefit from a cached response, you have to send the client to the same url as the first request. There are potential ways to accomplish this (cookies, local storage, server scripting, etc.), and let's suppose that you have implemented one.
You next have to consider that caching is only a suggestion, never a guarantee. If you expect the client to have the image cached and serve it the original url to benefit from that caching, you run the risk of a cache miss. In the case of a cache miss after the url expiry time, the original url is no longer valid. The client is then left unable to display the image (from the cache or from the provided url).
The behavior you're looking for simply cannot be provided by conventional caching when the expiry time is in the url.
Since the desired behavior cannot be achieved, you might consider your next best options, each of which will require giving up on one aspect of your requirement. In the order I would consider them:
If you give up short expiry times, you could use longer expiry times and rotate urls. For example, you might set the url expiry to midnight and then serve that same url for all requests that day. Your client will benefit from caching for the day, which is likely better than none at all. Obvious disadvantage is that your urls are valid longer.
If you give up content delivery, you could serve the images from a server which checks for access with each request. Clients will be able to cache the resource for as long as you want, which may be better than content delivery depending on the frequency of cache hits. A variation of this is to trade Amazon CloudFront for another provider, since there may be other content delivery networks which support this behavior (although I don't know of any). The loss of the content delivery network may be a disadvantage or may not matter much depending on your specific visitors.
If you give up the simplicity of a single static HTTP request, you could use client side scripting to determine the request(s) that should be made. For example, in javascript you could attempt to retrieve the resource using the original url (to benefit from caching), and if it fails (due to a cache miss and lapsed expiry) request a new url to use for the resource. A variation of this is to use some caching mechanism other than the browser cache, such as local storage. The disadvantage here is increased complexity and compromised ability for the browser to prefetch.
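A sketch of that last option (the /fresh-image-url route is an invented server endpoint that returns a newly signed Cloudfront URL as plain text):

// Try the previously issued signed URL first, hoping for a cache hit.
// If it fails (cache miss + lapsed expiry), fetch a fresh signed URL once.
function loadImage(imgElement, cachedUrl) {
  imgElement.onerror = function () {
    imgElement.onerror = null; // avoid an infinite retry loop
    fetch('/fresh-image-url?name=the_image')
      .then(function (response) { return response.text(); })
      .then(function (freshUrl) { imgElement.src = freshUrl; });
  };
  imgElement.src = cachedUrl; // served from browser cache if still present
}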
Save a list of user + image + expiration time -> cloudfront links. If a user has a non-expired cloudfront link, use it for the image and don't generate a new one.
It seems you already solved the issue. You said that your server issues an HTTP 307 redirect to the Cloudfront URL (the signed URL), so the browser caches only the Cloudfront URL, not your URL (www.myserver.com/the_image). So the scenario is as follows:
Client 1 checks www.myserver.com/the_image -> is redirected to the CloudFront URL -> content is cached
The CloudFront url now expires.
Client 1 checks www.myserver.com/the_image again -> is redirected to the same CloudFront URL -> retrieves the content from cache without fetching the CloudFront content again.
Client 2 checks www.myserver.com/the_image -> is redirected to the CloudFront URL, which denies access because the signature has expired.

Is a change required only in the code of a web application to support HSTS?

If I want a client to always use an HTTPS connection, do I only need to include the headers in the code of the application, or do I also need to make a change on the server? Also, how is this different from simply redirecting a user to an HTTPS page every single time they attempt to use HTTP?
If you just have HTTP -> HTTPS redirects, a client might still try to POST sensitive data to you (or GET a URL that has sensitive data in it) - this would leave the data exposed publicly. If the client knew your site used HSTS, it would not even try to hit it via HTTP, so that exposure is eliminated. It's a pretty small win IMO - the bigger risks are the vast # of root CAs that everyone trusts blindly thanks to policies at Microsoft, Mozilla, Opera, and Google.
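For reference, HSTS itself is just one response header, which you can emit either from application code or from the web server / proxy configuration; either way it only takes effect when served over HTTPS (the one-year max-age below is a common choice, not a requirement):

Strict-Transport-Security: max-age=31536000; includeSubDomains

Once a browser has seen this header, it rewrites http:// references to your domain to https:// before the request ever leaves the machine, so the plain HTTP -> HTTPS redirect is only needed for first-time visitors.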
