Google Analytics uses a GIF GET request, why not a POST request? - http-post

Google Analytics uses a GET request for a .gif image to the server:
http://www.google-analytics.com/__utm.gif?utmwv=4&utmn=769876874&utmhn=example.com&utmcs=ISO-8859-1&utmsr=1280x1024&utmsc=32-bit&utmul=en-us&utmje=...
We can observe that all parameters are sent in this GET request, and the requested image itself is of no use (it's just a 1px by 1px image).
Known information: if the query string is large, Google switches to a POST request.
Now the question is: why not always use a POST request, regardless of whether the query string is large or not?
Sending the data via a GET request leads to a security issue, since the parameters are stored in the browser history and in web server logs.
Could someone give supporting reasons why Google Analytics relies on both methods?

Because GET requests are what you use for retrieving information that does not alter state.
Please note that using POST has quite a few downsides: the browser usually warns against reloading a resource requested via POST (to prevent double data entry), and POST requests are not cached (which is why some analytics tools misuse them), not proxied, etc.
If you want to retrieve a LOT of data using a URL (advice: rethink whether there might be a better option), then it's necessary to use POST. From Wikipedia:
There are times when HTTP GET is less suitable even for data retrieval. An example of this is when a great deal of data would need to be specified in the URL. Browsers and web servers can have limits on the length of the URL that they will handle without truncation or error. Percent-encoding of reserved characters in URLs and query strings can significantly increase their length, and while Apache HTTP Server can handle up to 4,000 characters in a URL, Microsoft Internet Explorer is limited to 2048 characters in any URL. Equally, HTTP GET should not be used where sensitive information, such as user names and passwords have to be submitted along with other data for the request to complete. In these cases, even if HTTPS is used to encrypt the message body, data in the URL will be passed in clear text and many servers, proxies, and browsers will log the full URL in a way where it might be visible to third parties. In these cases, HTTP POST should be used.

A POST request would require an AJAX call, and it wouldn't work because of the same-origin policy (http://en.wikipedia.org/wiki/Same-origin_policy). But images can easily be loaded cross-site, so they just need to add an img tag to the DOM with the required URL and the browser will load it, sending the needed information to their servers for tracking.
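As a rough sketch of the technique (this is not Google's actual code, and the parameter names below are only illustrative), a tracker can build the query string in JavaScript and let the browser fetch the 1x1 GIF cross-origin:

// Hypothetical pixel-tracking sketch, not Google's real implementation.
var params = 'utmwv=4&utmhn=example.com&utmsr=' +
    screen.width + 'x' + screen.height;   // illustrative parameters only
var img = new Image(1, 1);                // creates an <img> element
img.src = 'https://www.google-analytics.com/__utm.gif?' + params;
// No same-origin restriction applies: the browser simply loads the image,
// and the query string reaches the tracking server (and its access logs).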

Related

Is SSL required for a website if the data is actually passed securely to the server?

I have a fairly basic query that I hope somebody can help with.
A website www.example.com presents a form to the user which collects a bunch of personal data. e.g. name, email, telephone.
The full URL is http://www.example.com/
On submit, the form's data is collected via JavaScript, POSTed via an AJAX call to www.example.com/process.php, and then passed on from there to a secure API using a curl request.
The full url provided in the curl request is https://api.mysecuresite.com/
Do I need to provide an SSL certificate for www.example.com?
Yes.
The data could be intercepted between the browser and http://www.example.com/process.php.
(Consider an analogy: You take a large amount of cash out of the bank, hold it up in the air and walk two blocks down the street. Then you put the cash in an armored van. Is this a safe way to handle the cash?)
Also: a man-in-the-middle attacker could inject a script into the page served from http://www.example.com, and that script could capture the data before it is put into the HTTP request to http://www.example.com/process.php. This would also intercept the data even if you were making the Ajax request directly to https://api.mysecuresite.com/.
Additionally, users will be informed that http://www.example.com is insecure, which will (correctly) discourage them from trusting any assurances you make that their data is safe.

How to validate that a certain domain is reachable from browser?

Our single-page app embeds videos from Youtube for end-users' consumption. Everything works great if the user has access to the Youtube domain and to the content of that domain's pages.
We however frequently run into users whose access to Youtube is blocked by a web filter box on their network, such as https://us.smoothwall.com/web-filtering/ . The challenge here is that the filter doesn't actually kill the request; it simply returns another page instead, with an HTTP status of 200. The page usually says something along the lines of "hey, sorry, this content is blocked".
One option is to try to fetch https://www.youtube.com/favicon.ico to prove that the domain is reachable. The issue is that these filters usually install a custom SSL certificate so they can inspect the HTTPS content (see: https://us.smoothwall.com/ssl-filtering-white-paper/), so I can't rely on TLS catching the content being swapped under an incorrect certificate; I would instead receive a perfectly valid favicon.ico file, except from a different site. There's also the whole CORS issue of issuing an XHR from our domain against youtube.com's domain, which means that if I want to get that favicon.ico I have to do it JSONP-style. And even by using a plain old <img> I can't test the contents of the image because of CORS (see Get image data in JavaScript?), so I'm stuck with that approach.
Are there any proven and reliable ways of dealing with this situation and testing browser-level reachability towards a specific domain?
Cheers.
In general, web proxies that want to play nicely typically annotate the HTTP conversation with additional response headers that can be detected.
So one approach to building a man-in-the-middle detector may be to inspect those response headers and compare the results from when behind the MITM, and when not.
Many public websites will display the headers for an arbitrary request; redbot is one.
So perhaps you could ask the party whose content is being modified to visit a URL like: youtube favicon via redbot.
Once you gather enough samples, you could heuristically build a detector.
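As a rough sketch of that idea (assuming the filtering proxy annotates responses with a header such as Via, which is by no means guaranteed, and using a hypothetical same-origin probe file /ping.txt), a detector could look like this:

// Hedged sketch: probe a same-origin resource and look for proxy-added headers.
// The header name (Via) and the probe URL are assumptions, not a proven recipe.
function detectProxy(callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/ping.txt', true);
    xhr.onload = function () {
        var via = xhr.getResponseHeader('Via');
        callback(via !== null);   // a Via header hints at an intermediary
    };
    xhr.send();
}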
Also, some CDNs (e.g., Akamai) will allow customers to visit a URL from remote proxy locations in their network. That might give better coverage, although those locations are unlikely to be behind a blocking firewall.

How to describe interaction between web server and web client?

At the moment I have the following understanding (which, I assume, is incomplete and probably even wrong).
A web server receives requests from a client. The requests are addressed to a particular "path" ("address", "URL") and have a particular type (GET, POST, and probably something else). GET and POST requests can also carry variables and their values (which can be thought of as a "dictionary" or "associative array"). The parameters of a GET request are set in the address line (for example: http://example.com?x=1&y=2), while the parameters of a POST request are set by the client (user) via web forms (in other words, a user fills in a form and presses the "Submit" button).
In addition to that we have what is called a SESSION (often implemented with COOKIES). This works the following way. When a web server gets a request (of GET or POST type), it checks the values of the sent parameters and, based on that, generates and sends back to the client the HTML code that is displayed in the browser (and seen by the user). In addition to that, the web server sends some parameters (which, again, can be imagined as a "dictionary" or "associative array"). These parameters are saved by the browser somewhere on the client side, and when the client sends a new request, he/she also sends back the session parameters received earlier from the web server. In effect the server says: you got this from me, memorize it, and next time you speak to me, give it back (so I can recognize you).
What I do not know is whether the client can see what exactly is in the session (what parameters are there and what values they have), and whether the client is able to modify the values of these parameters (or add or remove parameters). What the user can do, though, is decide not to accept any cookies (or session).
There is also something called "local storage" (it is available in HTML5). It works as follows. Like a SESSION, it is some information sent by the web server to the client and memorized (saved) by the client (if the client wants to). In contrast to the session, it is not sent back by the client to the server. Instead, JavaScript running on the client side (and sent by web servers as part of the HTML code) can access information from the local storage.
What I am still missing is how AJAX works. It seems that by clicking something in the browser, the user sends (via the browser) a request to the web server and waits for a response. Then the browser receives some response and uses it to modify (but not replace) the page observed by the user. What I am missing is how the browser knows how to use the response from the web server. Is it written in the HTML code (something like: if this is clicked, send this request to the web server, and use its answer (provided content) to modify this part of the page)?
I am going to answer your questions on AJAX and LocalStorage, also at a very high level, since your definitions strike me as being at that level.
AJAX stands for Asynchronous JavaScript and XML.
Your browser uses an object called XMLHttpRequest in order to make an HTTP request to a remote resource.
The client, being a client, is oblivious to what the remote server does with the request. All it has to do is provide the request with a URL, a method, and optionally the request's payload. The payload is most commonly a parameter or a group of parameters that are received by the remote server.
The request object has several methods and properties, and it also has its ways of handling the response.
What I am missing is how the browser knows how to use the response from the web-server.
You simply tell it what to do with the response. As mentioned above, the request object can also be told what to do with a response: it will listen for a response, and when one arrives, you tell the client what to do with it.
Is it (the response) written in the HTML code?
No. The response is written in whatever format the server served it in; most commonly it's plain (Unicode) text. A common way to serve a response is as a JSON (JavaScript Object Notation) object.
Whatever happens afterwards is a pure matter of implementation.
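To make that concrete, here is a minimal sketch (the URL /api/items and the response fields are made up for illustration) of telling the request object what to do with a JSON response:

// Minimal XMLHttpRequest sketch; URL and response shape are illustrative.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/items', true);                // asynchronous request
xhr.onload = function () {
    if (xhr.status === 200) {
        var items = JSON.parse(xhr.responseText);   // parse the JSON payload
        document.getElementById('list').textContent = items[0].name;
    }
};
xhr.send();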
LocalStorage
There is also something called "local storage" (it is available in HTML5). It works as follows. Like SESSION it is some information sent by web-server to the client and is also memorized (saved) by the client (if client wants to)
Not entirely accurate. Local Storage is indeed a new feature, introduced with HTML5. It is a new way of storing data on the client, and it is unique to an origin. By origin, we refer to a unique combination of protocol, domain, and port.
The lifetime of a Local Storage object on a client (again, per unique origin) is entirely up to the user. That said, a client application can of course manipulate the data and decide what's inside a Local Storage object. You are right that it is stored on the client and can be used there through JavaScript.
Example: some web tracking tools want to have some sort of a back up plan, in case the server that collects user data is unreachable for some reason. The web tracker, sometimes introduced as a JavaScript plugin, can write any event to the local storage first, and release it only when the remote server confirmed that it received the event successfully, even if the user closed the browser.
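A hedged sketch of that buffering idea (the storage key and the sendToServer helper are made up for illustration):

// Buffer events in localStorage until the server confirms receipt.
function trackEvent(event) {
    var queue = JSON.parse(localStorage.getItem('pendingEvents') || '[]');
    queue.push(event);
    localStorage.setItem('pendingEvents', JSON.stringify(queue));  // survives a closed browser
    sendToServer(queue, function onConfirmed() {                   // sendToServer is hypothetical
        localStorage.removeItem('pendingEvents');                  // release only after confirmation
    });
}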
First of all, this is just a simple explanation to clarify your mind; to explain this stuff in more detail we would need to write a book. That being said, I'll go step by step.
Request
A request is a client asking for / sending data to a server.
This request has the following parts:
A URL (protocol + hostname/IP + path)
A Method (GET, POST, PUT, DELETE, PATCH, and so on)
Some optional parameters (the way they are sent depends on the method)
Some headers (metadata sent to the server)
Some optional cookies
An optional SESSION ID
Some explanations about this:
Cookies can be set by the client or by the server, but they are always stored by the client's browser. Therefore, the browser can decide whether to accept them or not, and it can delete or modify them.
The session is stored on the server. The server sends the client a session ID to be able to recognize it in any future request.
Session and cookies are two different things. One is server side, and the other is client side.
AJAX
I'll omit the meaning of the acronym, as you can easily google it.
The great thing about AJAX is the very first A, which stands for asynchronous, meaning that the JS engine (in this case built into the browser) won't block until the response gets back.
To understand how AJAX works, you have to know that it's very much alike a common request, but with the difference that it can be triggered without reloading the web page.
The content of the response is whatever you want it to be: some HTML code, a JSON string, or even plain text.
The way the response is treated depends on the implementation and programming. As an example, you could simply alert() the result of an AJAX call, or you could append it to a DOM element.
Local Storage
This doesn't have much to do with the request/response cycle.
Local storage is just some disk space offered by the browser so you can save data in the browser that persists even if the page or the browser is closed.
An example
Browsers such as Chrome offer a JavaScript API (localStorage) to manage local storage. It's client side, and you can programmatically access this storage and perform CRUD operations. It's just like a small non-relational, non-SQL DB in the browser.
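For example, the basic operations of the standard Web Storage API look like this (the key and value are illustrative):

localStorage.setItem('theme', 'dark');       // create / update
var theme = localStorage.getItem('theme');   // read
localStorage.removeItem('theme');            // delete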
I will summarize your main questions along with a brief answer right below each of them:
Q1:
Can the client see what exactly is in the session?
A: No. The client only knows the "SessionID", which is metadata (all other session data is stored on the server only, and the client can't see or alter it). The SessionID is used by the server only to identify the client and to map the application process to its previous state.
HTTP is a stateless protocol, and this classic technique enables it as stateful.
There are very rare cases when the complete session data is stored on client-side (but in such cases, the server should also encrypt the session data so that the client can't see/alter it).
On the other hand, there are web clients that don't have the capability to store cookies at all, or they have features that prevent storing cookie data (e.g. the ability of the user to reject cookies from domains). In such cases, the workaround is to inject the SessionID into URL parameters, by using HTTP redirects.
Q2:
What's the difference between HTML5 LocalStorage and Session?
LocalStorage can be viewed as the client's own 'session' data, or better said, a local data store where the client can save/persist data. Only the client (mainly from JavaScript) can access and alter the data. Think of it as JavaScript-controlled persistent storage, with the advantage over cookies that you control what data you store, its structure, and its format. It also avoids cookies' limitations on data size and structure.
Q3:
How AJAX works?
In very simple words, AJAX means loading on-demand data on top of an already loaded (HTML) page. A typical http request would load the whole data of a page, while an ajax request would load and update just a portion of the (already-loaded) page.
This being said, an AJAX request is very similar to a standard HTTP Request.
Ajax requests are controlled by the javascript code and it can enrich the interaction with the page. You can request specific segments of data and update sections of the page.
Now, remember the old days when any interaction with a website (e.g. signing in, navigating to other pages, etc.) required a complete page reload? Back then, a lot of unnecessary traffic occurred just to perform any simple action, which in turn hurt site responsiveness, user experience, network traffic, and so on.
This happened because browsers at that time could not (a) perform a parallel HTTP request to the server and (b) render a partial HTML view.
Modern browsers come with these two features, which together enable AJAX: they can invoke asynchronous (parallel) HTTP requests (Ajax HTTP requests), and they also provide an on-the-fly DOM alteration mechanism via JavaScript (real-time HTML Document Object Model manipulation).
Please let me know if you need more info on these topics, or if there's anything else I can help with.
For a more profound understanding, I also recommend this nice web history article, as it explains how everything started, from when HTML was created and what its purpose was (to define [at the time] rich documents), and then how HTTP was initially created and what problem it solved (at the time, to "transfer" static HTML). That explains why it is a stateless protocol.
Later on, as HTML and the Web evolved, other needs emerged (such as the need to interact with the end user), and the cookie mechanism enhanced the protocol to enable stateful client-server communication by using session cookies. Then Ajax followed. Nowadays cookies come with their own limitations too, and we have LocalStorage. Did I also mention WebSockets?
1. Establishing a Connection
The most common way web servers and clients communicate is through a connection which follows Transmission Control Protocol, or TCP. Basically, when using TCP, a connection is established between client and server machines through a series of back-and-forth checks. Once the connection is established and open, data can be sent between client and server. This connection can also be termed a Session.
There is also UDP, or User Datagram Protocol which has a slightly different way of communicating and comes with its own set of pros and cons. I believe recently some browsers may have begun to use a combination of the two in order to get the best results.
There is a ton more to be said here, but unless you are going to be writing a browser (or become a hacker) this should not concern you too much beyond the basics.
2. Sending Packets
Once the client-server connection is established, packets of data can be sent between the two. TCP packets contain various bits of information to assist in communication between the two ports. For web programmers, the most important part of the packet will be the section which includes the HTTP request.
HTTP, or Hypertext Transfer Protocol, is another protocol which describes what the makeup/format of these client-server communications should be.
A most basic example of the relevant portion of a packet sent from a client to a server is as follows:
GET /index.html HTTP/1.1
Host: www.example.com
The first line here is called the Request line. GET describes the method to be used (others include POST, HEAD, PUT, DELETE, etc.), /index.html describes the resource requested, and HTTP/1.1 describes the protocol being used.
The second line is in this case the only header field in the request: the Host field, which is sort of an alias for the IP address of the server, given by the DNS.
[Since you mentioned it, the difference between a GET request and a POST request is simply that in a GET request the parameters (e.g. form data) are included as part of the Request Line, whereas in a POST request the parameters are included as part of the Message Body (see below).]
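For comparison, here is a rough sketch of the same idea using POST (the path and form fields are made up for illustration); note that the parameters move into the message body below the headers:

POST /process.php HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 26

name=Bob&email=bob%40x.com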
3. Receiving Packets
Depending on the request sent to the server, the server will scratch its head, think about what you asked it, and respond accordingly (aka whatever you program it to do).
Here is an example of a response packet sent from the server:
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
...
<html>
<head>
<title>A response from a server</title>
</head>
<body>
<h1>Hello World!</h1>
</body>
</html>
The first line here is the Status Line which includes a numerical code along with a brief text description. 200 OK obviously means success. Most people are probably also familiar with 404 Not Found, for example.
The second line is the first of the Response Header Fields. Other fields often added include Date, Content-Length, and other useful metadata.
Below the headers and the necessary empty line is finally the (optional) Message Body. Of course this is usually the most exciting part of the response, as it will contain things like HTML for our browsers to display for us, JSON data, or pretty much anything you can code in a return statement.
4. AJAX, Asynchronous JavaScript and XML
Based on all of that, AJAX is fairly simple to understand. In fact, the packets sent and received can look identical to those of non-Ajax requests.
The only difference is how and when the browser decides to send a request packet. Normally, upon page refresh a browser will send a request to the server. However, when issuing an AJAX request, the programmer simply tells the browser to please send a packet to the server NOW as opposed to on page refresh.
However, given the nature of AJAX requests, usually the Message Body won't contain an entire HTML document, but will request smaller, more specific bits of data, such as a query from a database.
Then, the JavaScript which makes the Ajax call can also act based on the response. Any JavaScript method is available, since making an Ajax call is just another JavaScript function. Thus, you can do things like set innerHTML to add/replace content on your page with some HTML returned by the Ajax call. Alternatively, you could make an Ajax call which simply returns True or False, and then call some JavaScript function with an if/else statement. As you can hopefully see, Ajax has nothing to do with HTML per se; it is just a JavaScript mechanism which makes a request to the server and returns the response, whatever it may be.
5. Cookies
HTTP Protocol is an example of a Stateless Protocol. Basically, this means that each pair of Request and Response (like we described) is treated independently of other requests and responses. Thus, the server does not have to keep track of all the thousands of users who are currently demanding attention. Instead, it can just respond to each request individually.
However, sometimes we wish the server would remember us. How annoying would it be if every time I wanted to check my Gmail I had to log in all over again because the server forgot about me?
To solve this problem a server can send Cookies to be stored on the client's machine. The server can send a response which tells the client to store a cookie and what exactly it should contain. The client's browser is in charge of storing these cookies on the client's system, so the location of these cookies will vary depending on your browser and OS. It is important to realize, though, that these are just small files stored on the client machine which are in fact readable and writable by anyone who knows how to locate and understand them. As you can imagine, this poses a few potential security threats. One solution is to encrypt the data stored inside these cookies so that a malicious user won't be able to take advantage of the information you made available. (Since your browser is setting these cookies, there is usually a setting within your browser which you can modify to either accept, reject, or perhaps set a new location for cookies.)
This way, when the client makes a request from the server, it can include the Cookie within one of the Request Header Fields which will tell the server, "Hey I am an authenticated user, my name is Bob, and I was just in the middle of writing an extremely captivating blog post before my laptop died," or, "I have 3 designer suits picked out in my shopping cart but I am still planning on searching your site tomorrow for a 4th," for example.
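As a rough illustration (the cookie name and value are made up), the server's response might include:

HTTP/1.1 200 OK
Set-Cookie: sessionid=abc123; HttpOnly; Secure

and every later request from that browser would then carry it back:

GET /inbox HTTP/1.1
Host: mail.example.com
Cookie: sessionid=abc123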
6. Local Storage
HTML5 introduced Local Storage as a more secure alternative to Cookies. Unlike cookies, with local storage data is not actually sent to the server. Instead, the browser itself keeps track of State.
This alternative also allows much larger amounts of data to be stored, as there is no requirement for it to be passed across the internet between client and server.
7. Keep Researching
That should cover the basics and give a pretty clear picture as to what is going on between clients and servers. There is more to be said on each of these points, and you can find plenty of information with a simple Google search.

How does ajax form submission work?

I know how to use ajax for submitting a form and all. What I am concerned about is, what is actually happening in the background when a form is submitted via ajax.
How are the values transferred? Encrypted or not? And what is the need of specifying submission type, I mean GET or POST, if the URL is not showing the form fields?
Edit: Found this on w3schools:
GET requests can be cached
GET requests remain in the browser history
GET requests can be bookmarked
GET requests should never be used when dealing with sensitive data
GET requests have length restrictions
GET requests should be used only to retrieve data
POST requests are never cached
POST requests do not remain in the browser history
POST requests cannot be bookmarked
POST requests have no restrictions on data length
How do these apply to ajax form submission?
Basically, when you Ajax-submit a form, the browser is doing exactly the same thing as when you submit a form via GET or POST as a user, except that it is done asynchronously by the browser, using XMLHttpRequest.
If you submit the form as a GET request, all of the form values are stitched together as a query string and appended to the URL (the form's ACTION URL), prefixed by a ?. Those values then end up in the browser history, in bookmarks, and in web server logs, and over plain HTTP anyone who can intercept the traffic can read them. The POST method sends the form data as a message body, separate from the URL, so it stays out of history and logs; and if the URL is HTTPS, both the query string and the POST body are encrypted in transit.
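As a minimal sketch of what an Ajax form POST looks like under the hood (the element ID, field names, and URL are made up for illustration):

// Serialize two form fields and POST them with XMLHttpRequest.
var form = document.getElementById('signup');
var data = 'name=' + encodeURIComponent(form.elements['name'].value) +
           '&email=' + encodeURIComponent(form.elements['email'].value);
var xhr = new XMLHttpRequest();
xhr.open('POST', '/process.php', true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.onload = function () {
    console.log('Server replied with status ' + xhr.status);
};
xhr.send(data);   // the form data travels in the request body, not the URL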
It looks like you are just starting out in the world of web development - welcome to the world of programming. I would recommend reading some good web development/programming books (I don't want to promote any particular book here); Amazon may suggest a few good ones under "Web Development" or similar search terms.
Also, I suggest that you read up a little on GET vs. POST by googling for it (I can only include one or two links - google will show you hundreds).
For a clearer understanding of the behind-the-scenes details, please refer to the links given below.
http://www.jabet.com/
How does AJAX work?
Actually, an AJAX request is the same as a normal request at the server end.
GET and POST have their own use cases. For example, GET is constrained by URL-length limits that vary by browser (on the order of a few kilobytes), whereas POST has no such practical limit.
For a server, AJAX and normal requests are the same, so it depends on the server code which method you wish to support.
AJAX requests are NOT encrypted by themselves; they are only encrypted when sent over HTTPS.
http://www.w3schools.com/tags/ref_httpmethods.asp
It looks like you want a very detailed answer, so here is how you can find it yourself:
Google it and read the pages thoroughly (Wikipedia, for example)
Read http://www.w3.org/TR/XMLHttpRequest/
Inspect the packets between your browser and the server

Does AJAX have any special security concerns?

I know all about SQL injections, and peeking into javascript files that a website uses, and also that GET requests contain all of the information in a URL.
Is there any security concern that is special to AJAX and only pertains to using AJAX?
For example, sending POST requests via AJAX seems completely safe to me. Barring SQL injections, I can't think of one thing that could go wrong... is that actually the case?
Also, are "requests" of any kind that a user's browser sends or any information it receives available to be viewed by a third party who should not be viewing? And can that happen to AJAX post requests ('post' requests specifically; not 'get')?
It's like any other form of data input: validate your values, check the referrer, authenticate the session, use SSL.

Resources