Why do update streams require the user to manually 'load more content'? - ajax

Looking at a lot of web applications (websites/services/whatever) that have a 'streaming' component, typically a 'social' app: think Facebook's 'Wall', Twitter's 'Feed', LinkedIn's 'News Feed'.
They share a pretty similar characteristic: a notice of new items is added to the page automatically (presumably via a background Ajax call), but the HTML representing the newest feed items isn't loaded into the page until the user clicks that update link.
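For concreteness, here's a minimal sketch (in jQuery) of the pattern I'm describing; the endpoints and element IDs are made up for illustration:

```javascript
// A minimal sketch of the "N new items - click to load" pattern (jQuery).
// The endpoints (/feed/count, /feed/new) and element IDs are hypothetical.
var lastSeenId = 0;

function checkForUpdates() {
  // Cheap request: only asks how many items are newer than what we already have.
  $.getJSON('/feed/count', { since: lastSeenId }, function (data) {
    if (data.count > 0) {
      $('#new-items-notice')
        .text(data.count + ' new items - click to load')
        .show();
    }
  });
}

// The heavier request (actual HTML) only happens when the user asks for it.
$('#new-items-notice').on('click', function () {
  $.get('/feed/new', { since: lastSeenId }, function (html) {
    $('#feed').prepend(html);
    $('#new-items-notice').hide();
    // A real app would also return the newest item id so lastSeenId can advance.
  });
});

setInterval(checkForUpdates, 30000); // poll for the count every 30 seconds
```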
I'm curious whether this design decision is made for any of the following reasons, and if so, could anyone who has worked on one of these types of apps explain the reasoning they found for doing it this way:
User experience: updates for a large number of 'Facebook Friends', 'Pages', or 'Tweets' would move too quickly for anyone to absorb and read with any real intent, so the page isn't refreshed automatically.
Client-side performance: fetching a simple 'count' of updates requires less bandwidth (less load time) and less JS running to update the page for anyone who has the site open, and thus a lighter-weight feel on the client side.
Server-side performance: fewer requests coming into the server to gather more information about recent updates (less outgoing bandwidth, and more free cycles for grabbing information for those who do request it by clicking the link). While I'm sure the owners of these websites aren't 'short on resources', if everyone who had Twitter or Facebook open in a browser got a full update fetched from the server every time one was created, I'm sure it would be a much more significant drag on resources.
Cost savings: they are actually trying to save resources (it supposedly takes a cup of coffee's worth of energy to perform a Google search, haha), and sending a few bytes of data representing the count of new updates is a much lighter load for applications being used simultaneously in hundreds of thousands of browser windows (not to mention API requests).
I have a few more questions that depend on the answer to this first one, so I'll probably add those here or ask another question!
Thanks
P.S. This question got trolled off of the 'Web Applications' site -- so I brought my questions here, where they're not too 'broad' or 'off-topic' (-8

Until the recent UI changes to Facebook, they did auto-load new content. It was extremely frustrating from a user perspective: you'd be reading through the list of your friends' posts and all of a sudden everything would shift, and you'd have no idea where the post you were just reading went.
I'd imagine this is the main reason.

Related

How to read content from reCAPTCHA protected site

My client needs data scraped from a website. I am planning to use php_curl. The problem is, the site is using Google reCAPTCHA. A few important data items are visible only when you click a "show this information" link; then the reCAPTCHA appears in a lightbox, and once it is solved it vanishes and the information is displayed.
I have checked the source HTML; the protected item is actually loaded when someone clicks, and there is no way for me to automate this click. I have even tried opening the site in an iframe and using JS to click it, but that fails because the two domains are different. I have also tried the Selenium standalone version, but its downloads are corrupt.
Unless there is a design flaw with the website, the reCAPTCHA will prevent you from scraping the material without human intervention.
Technically, your best bet is to employ humans to solve CAPTCHAs all day and write some software to automatically scrape the material each one protects once they solve it. A number of viable businesses have been created this way, where the data is valuable and there is a genuine public interest in opening the data set. (For example, I've heard that flight companies use CAPTCHA devices to prevent price-comparison sites from driving down the cost to the consumer, and I'd argue that in such a case there is an overwhelming public interest in defeating such defences.)
Morally, however, you would need to tell us what you are doing in order for us to advise you. It is possible your client is merely planning to steal other people's material and then attempt to monetise it for him/herself, even though they had no hand in creating it. That may breach some copyright laws, but moreover, they (and you) need to decide if the scraping is fair.
I faced the same problem but resolved it by clearing the cookies on the HTTP request's user agent, waiting for some time (Thread.Sleep), and then starting to scrape again. I am doing this in C#, not in PHP, but applying the same logic may help you.

Popup banner on a page to announce when other people are looking at the same page

New customers of company X must be investigated to find out if they are legitimate. A web page lists all the customers that have not yet been investigated, each one hotlinked to a page with details about that customer. It often happens that two investigators visit the details page for the same new customer at the same time and investigate the same customer independently. This is a waste of time.
Several investigators have suggested that a dynamic HTML banner could pop up on the customer detail page whenever two or more investigators are looking at the same detail page. If investigator X is looking at a certain page, and then investigator Y navigates to the same page, Y would see a banner at the top of the page warning them that X was already looking at this page, and a similar banner would pop up on X's page to warn them that Y had started looking at it. This seems like a reasonable idea. (The ZenDesk ticket management system uses a similar popup to help prevent two customer service agents from trying to service the same ticket at the same time.)
Before I go and implement a lot of stuff, is there anything that does something like that that I could use right out of the can? Or is there anything that handles just the front-end parts of this, to which I could attach a homegrown backend?
Since I didn't receive an answer, I went ahead with plan B, which was to go ahead and implement "a lot of stuff". This is now available at https://github.com/ZipRecruiter/2banner under a free license.
The stuff I implemented consists of three components:
A backend database API for storing information about which users looked at which pages and when
A middle API server that can answer queries about who is looking at each page
A JavaScript thingy for the web page that sends off an AJAX request to the API server and then pops up a banner if the response indicates that someone else is looking at the same page
The backend uses Perl's DBIx::Class library. The API server is designed as a plugin component for Perl's Catalyst framework. The front-end uses jQuery. All three of these components are more or less independent, and any of them could be replaced with something else, so it is my hope that people will be able to use some of this even if they can't use the whole thing.
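For anyone who just wants the shape of the front-end piece, here is a rough sketch of what that JavaScript component does. This is not the actual 2banner code; the endpoint and markup are made up:

```javascript
// Not the actual 2banner code - just a sketch of the front-end idea.
// It assumes a hypothetical /api/viewers endpoint that records "this user is
// viewing this page" and returns the other users currently on the same page.
function pollViewers() {
  $.getJSON('/api/viewers', { page: window.location.pathname }, function (data) {
    if (data.others && data.others.length > 0) {
      $('#viewer-banner')
        .text('Also looking at this page: ' + data.others.join(', '))
        .show();
    } else {
      $('#viewer-banner').hide();
    }
  });
}

setInterval(pollViewers, 15000); // re-check every 15 seconds
pollViewers();                   // and once on page load
```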
The package is called 2banner. The source code, with detailed installation and usage instructions, is available for free distribution under the BSD three-clause license, courtesy of my employer, ZipRecruiter.
https://github.com/ZipRecruiter/2banner
Share and enjoy.

Implement real-time updating notification feature

I'd like to implement a visual indicator, in various sections of my app, for items whose status is pending, similar to Facebook's / Google Plus's unread notification indicator. I have written an API to fetch the count to be displayed, but I am stuck on updating it every time an item gets added or deleted. I could only think of two approaches, neither of which I am satisfied with: the first is making an API call for the count whenever a POST or DELETE operation is performed; the second is refreshing the page after some time span.
I think there should be a much better way of doing this from the server side. Any suggestions, or any gem to do so?
Even in Gmail it is refreshed on client request. The server calculates the number of new items, and the client initiates a request (probably with AJAX). This requires an almost negligible amount of data and processing time, so you can probably get away with it. Various cache gems can even store the refreshed part of the page if no data has changed since the last request, which also solves the problem of calculating only when something has changed.
UPDATE:
You can solve the problem in basically two ways: server-side push, or a client-side query. Push is problematic for various reasons and rarely used in a web environment, at least as far as I know. Most pages (if not all) use a timed query to refresh this kind of information. You can check this with the right tool, like Firebug for Firefox; you will see the individual requests initiated towards the server.
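As a concrete illustration, the timed query can be as small as this (a sketch only; the endpoint and element ID are assumptions, and the endpoint would simply wrap the count API you already wrote):

```javascript
// Sketch of the timed-query (polling) approach with jQuery.
// /notifications/pending_count and #pending-badge are assumed names.
function refreshPendingCount() {
  $.getJSON('/notifications/pending_count', function (data) {
    $('#pending-badge')
      .text(data.count)
      .toggle(data.count > 0); // hide the badge when nothing is pending
  });
}

setInterval(refreshPendingCount, 20000); // every 20 seconds
refreshPendingCount();                   // and once on page load
```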
When you fire a request through AJAX, the server replies. Normally it generates a page fragment to replace the old content with the new, but a cache mechanism can intervene, and if nothing has changed, you may get the previously stored cache fragment. See some tutorials here for the various gems; one of them may fit your needs.
If you would prefer a complete solution, check out Faye (tutorial here). I haven't used it, but it may be worth a try; it seems simple enough.
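For the push route, the browser side of Faye is roughly the following, based on its documented client; the channel name and message shape are assumptions, and your server-side code would publish to the same channel whenever an item is added or deleted:

```javascript
// Rough shape of the push approach with Faye's browser client. Assumes the
// Faye client script is included on the page (it is served from
// /faye/client.js when the server is mounted at /faye).
var client = new Faye.Client('/faye');

client.subscribe('/notifications/pending', function (message) {
  // Assumed message shape: { count: <number of pending items> }
  $('#pending-badge').text(message.count);
});
```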

Pros and Cons of an all Ajax Site? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
So I actually saw a full ajax site somewhere (I forget where) and thought it would be something new and fun to try. I used an old site I had built and put it on a new server. With a little bit of jquery and ajax, I was able to make the entire site work on one page load.
My question is, what are some pros and (more likely) cons to this method?
Please note: the site works through a semi-clever linking function. Everything works perfectly fine if the user doesn't have JavaScript enabled; the newly requested page loads like it would on any other website.
More detail -- say the user loads the homepage of the site, then logs in. When they log in, the login box fades and reappears with user info. Other content on the page loads as necessary upon logging in. If they click a link, let's say "Articles", one column on the homepage slides up and slides back down with the articles. If they click "Home", the articles slide up and the homepage content slides back down. Things like posting comments, viewing profiles, voting on things, etc. are all done through AJAX.
Is this a bad method of web design? If so, why?
I am open to all answers/opinions.
IMO, this isn't "bad" or "good". That depends completely on whether or not the website fulfills the requirements. Oftentimes, developers working on AJAX-only sites tend to miss the whole negative SEO impact issue. However, if the site is developed to support progressive enhancement (or graceful degradation depending on your point of view), which it sounds like you have, then you're good. Only things to prepare for are times when the AJAX call can't complete as expected (make sure you're dealing with timeouts, broken links, etc) so the user doesn't get stuck staring at a loading icon. (The kind of stuff you'd have to deal with in any application, really.)
There are plenty of single-page websites out there using heavy JS and AJAX for the UI, and they are great. Specifically, I know of portfolio sites for web designers and web app development teams that use this approach. Oftentimes the app feels a bit like a Flash app, but without the need for a special plugin.
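To make the "prepare for failed AJAX calls" point concrete, here is a hedged jQuery sketch of the progressive-enhancement idea: intercept internal links, load the fragment over AJAX with a timeout, and fall back to a normal page load on error. The a[data-ajax] selector and #content container are assumptions:

```javascript
// Progressive enhancement with a failure fallback (jQuery sketch).
$(document).on('click', 'a[data-ajax]', function (e) {
  e.preventDefault();
  var url = $(this).attr('href');

  $.ajax({
    url: url,
    timeout: 8000, // don't leave the user staring at a loading icon forever
    success: function (html) {
      $('#content').html(html);
      if (window.history && history.pushState) {
        history.pushState(null, '', url); // keep the URL bookmarkable
      }
    },
    error: function () {
      window.location.href = url; // fall back to a normal full page load
    }
  });
});
```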
"Is this a bad method of web design? If so, why?"
Certainly not. In fact, making web pages behave more like desktop applications, whilst remaining functional to ALL users, is the holy grail of web design.
I say, as long as you consider ALL your users, i.e. mobile, text-only, low bandwidth, and small screen sizes, then you will be fine. Too many developers build just for their huge 19" screens and 10 Mbps connections, so users get left behind through almost no fault of their own.
It depends on the user
This relates closely to UX, IMHO, though of course it's on-topic for programming solutions.
All-AJAX is often called "managing state" now, 12 years after this question was asked.
From my experience in:
Creating a platform for API plugins
Creating two of my own CMS web apps for different purposes
Managing many different WordPress.org sites for different purposes
Managing my own cloud servers for both PHP-AJAX and Node.js doing these calls
...it depends on what is most efficient for users.
Consider these scenarios:
Will users be clicking around this website all day long or for at least an hour adjusting many different options and <form> inputs?
Or will many users visit briefly to perform just a handful of quick tasks?
State-managed / all-AJAX is by far best for scenario 1, with Facebook and Gmail as prime examples.
Whole-page loads are more efficient for scenario 2, like blogs, especially with pages linked directly from search results. That might apply to webstores like Amazon, maybe, where users search Google to find one or two products, then leave.
Philosophically, I've heard that the difference is about the number of users and traffic, but I don't quite agree. It's more about how much clicking and <form> sending the primary target user will be doing.

How to know the number of users who have turned off images in their browser?

I'm working on a quite popular website, which looks good if the user has turned on the "Load images" option in their browser's settings.
When you open the website with images turned off, it becomes unusable; many components won't work, because the user won't see "important" buttons (we don't use standard OS buttons).
So we can't understand and measure the negative business impact of this mistake (absent alt/title attributes).
We can't set a priority for this task, because we don't know how many such users come to our website.
Please give me some advice on how this problem can be solved.
Look in the logs for how many hits you get on a page without the subsequent requests from the browser for the other images.
Of course the browser might have images cached, so look for the first time you get a hit.
You can even use the IP address for this, since it's OK if you throw out some good data (that is, new hits that you disregard). The question is just: of the hits you know are first-time, how many don't fetch images?
If this is a public page (i.e. not a web application that you've logged in to), also disregard search engine bots to the greatest extent possible; they usually won't retrieve images.
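As a rough illustration of the log-analysis approach (not a drop-in tool), something like this Node.js sketch could count page-visiting IPs that never requested an image. The page path, image prefix, log file name, and common/combined log format are assumptions, and first-visit detection and bot filtering are left out to keep it short:

```javascript
// Count IPs that requested the page but never fetched an image.
const fs = require('fs');
const readline = require('readline');

const pageHits = new Set();   // IPs that requested the HTML page
const imageHits = new Set();  // IPs that requested at least one image

const rl = readline.createInterface({ input: fs.createReadStream('access.log') });

rl.on('line', (line) => {
  const ip = line.split(' ')[0];
  if (/"GET \/some-page[^"]*"/.test(line)) pageHits.add(ip);
  if (/"GET \/images\//.test(line)) imageHits.add(ip);
});

rl.on('close', () => {
  let withoutImages = 0;
  for (const ip of pageHits) {
    if (!imageHits.has(ip)) withoutImages += 1;
  }
  console.log(withoutImages + ' of ' + pageHits.size + ' page-visiting IPs never fetched an image');
});
```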
