Scenario
A client application features an interactive map. On move/zoom, the map control requests tiles (as needed) from a tile server -- a GeoServer in this case -- so the server receives a stream of tile requests as the user moves around the map. Let us imagine that there is no limit on how many requests the client can make.
Problem
When a user is moving fast, requests to the tile server pile up. The tile server gets bogged down and is not able to provide tiles in a timely fashion.
Additionally, the tile request queue is served in request order, so a user could pan from Florida to California and have to wait for the Florida tiles to load before seeing any tiles in California.
Questions
How can we improve the perceived performance of the client?
What are some strategies to employ on the client side to prevent a large number of requests when panning fast? When zooming fast?
What are some strategies to employ on the server side to determine whether a request is no longer needed or should take lower priority?
Possible Solution
Place a custom proxy in front of the tile server so that tiles can be requested with a timestamp, with later requests always receiving priority. The proxy could also implement a feature allowing the client application to abandon a request.
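On the client side, the sort of thing I imagine is debouncing pan/zoom events and abandoning in-flight tile requests, roughly like the sketch below (the tile URL scheme, the 150 ms delay, and the drawTile hook are placeholders):

```typescript
// Sketch: debounce map-move events and abort stale tile requests.
let debounceTimer: ReturnType<typeof setTimeout> | undefined;
let inFlight: AbortController | undefined;

function onMapMoved(z: number, x: number, y: number): void {
  // Debounce: only fetch once the user has paused for 150 ms.
  clearTimeout(debounceTimer);
  debounceTimer = setTimeout(() => requestTile(z, x, y), 150);
}

async function requestTile(z: number, x: number, y: number): Promise<void> {
  // Abandon any earlier tile request that is no longer relevant.
  inFlight?.abort();
  inFlight = new AbortController();
  try {
    const resp = await fetch(`/tiles/${z}/${x}/${y}.png`, { signal: inFlight.signal });
    const blob = await resp.blob();
    drawTile(z, x, y, blob); // assumed rendering hook
  } catch (err) {
    if ((err as Error).name !== "AbortError") throw err; // aborts are expected
  }
}

declare function drawTile(z: number, x: number, y: number, tile: Blob): void;
```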
Thank you in advance.
I looked at various solutions for stopping image loading, but it seems no one has found a way to stop loading an image once the browser has already started downloading it.
So I can recommend an alternative. Browsers limit the number of parallel downloads per domain (traditionally two), which means you need to load images for different locations from different subdomains so that the browser can load multiple locations in parallel.
So, say California images come from:
california.yourwebsite.com/images
And Florida images from:
florida.yourwebsite.com/images
Moreover, you can use a different subdomain for each zoom level, so that if images for a certain zoom level are still loading and the user changes zoom level, the browser can download the new zoom level's images immediately.
zoom10.florida.yourwebsite.com/images
For this, you need to create a wildcard *.yourwebsite.com DNS record pointing to your web server(s) from your domain panel.
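For illustration, a small helper that builds tile URLs on a region- and zoom-specific subdomain could look like this (the hostnames follow the scheme above and are placeholders):

```typescript
// Sketch: build a tile URL on a region- and zoom-specific subdomain so the
// browser's per-host connection limit doesn't serialize unrelated downloads.
function tileUrl(region: string, zoom: number, x: number, y: number): string {
  const host = `zoom${zoom}.${region}.yourwebsite.com`; // illustrative hostname
  return `https://${host}/images/${x}/${y}.png`;
}

// Usage:
// tileUrl("california", 10, 163, 395)
//   -> "https://zoom10.california.yourwebsite.com/images/163/395.png"
```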
Does this answer your question?
Related
Sorry to bother you with a probably easy question.
We would like to set up a WMS (Web Map Service) that forwards all requests to three other WMS services depending on the scale (zoom interval).
The aim is to combine three background maps (which show the same geographic area at different levels of detail and come as three separate WMS services) into one WMS, so that users can zoom in and out seamlessly using just that one WMS, which forwards the requests to the other three.
Is that possible? Is there software that can help me solve this problem?
Thank you in advance!!
The functionality you are looking for is called cascading. It is possible to do this using GeoServer.
The GeoServer manual explains in detail how you can set up a cascaded WMS layer. However, there is not currently any way to apply a scale limit to a cascaded layer unless the source server applies it.
Another project that supports cascading WMS is MapProxy, but I'm not sure if it can change sources depending on the scale of the request. It does have an option to limit a request to a min/max scale, but it returns a blank image outside those limits.
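If neither tool does exactly what you need, the forwarding logic itself is small enough to sketch yourself. Below is an illustrative Node/TypeScript proxy that picks one of three backend WMS services from the width of the requested BBOX; the endpoints and thresholds are made up, and a production version should compute the true scale from BBOX, image size, and CRS:

```typescript
import http from "node:http";

// Illustrative backend WMS endpoints, ordered coarse -> detailed.
const backends = [
  { maxBboxWidth: Infinity, url: "http://wms-overview.example.com/wms" },
  { maxBboxWidth: 10.0,     url: "http://wms-midscale.example.com/wms" },
  { maxBboxWidth: 0.5,      url: "http://wms-detail.example.com/wms" },
];

http.createServer((req, res) => {
  const incoming = new URL(req.url ?? "/", "http://localhost");
  // Assumes an uppercase BBOX parameter and EPSG:4326 (degrees).
  const bbox = (incoming.searchParams.get("BBOX") ?? "").split(",").map(Number);
  const width = Math.abs((bbox[2] ?? 0) - (bbox[0] ?? 0));

  // Pick the most detailed backend whose threshold still covers this width.
  const target = backends.filter(b => width <= b.maxBboxWidth).pop() ?? backends[0];

  // Forward the original WMS query string unchanged and stream the reply back.
  http.get(`${target.url}${incoming.search}`, upstream => {
    res.writeHead(upstream.statusCode ?? 502, upstream.headers);
    upstream.pipe(res);
  }).on("error", () => { res.writeHead(502); res.end(); });
}).listen(8080);
```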
I don't fully understand how responsive images work, so if they don't work the way I think they do, feel free to ignore this question. If not, here goes.
To ask this question, let me paint a picture: a developer uses Workbox precaching to cache a series of images in the service worker. Since the developer wants to make his web app responsive, he has to provide every image in different sizes -- some large, some medium, and others small -- and all of these images get precached, since that's easy (you just use the glob pattern).

But here is the problem. Say a user on a smartphone visits the app and ends up using only the medium-sized image. That means the large image and the small one are useless to that user, and he has just wasted precious bandwidth caching them. The same would apply to every other responsive image in the app.

So the question is whether there is a way (and that is a very big if) for Workbox to use media queries to know which images to cache based on the user's device. Maybe delay the installation of the service worker for a few milliseconds in order to check at what dimensions the user's device displays the images, then cache only the images the user will actually use and discard the rest.
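For reference, the kind of approach I am imagining might look like the sketch below: the page checks the viewport before registering the worker and passes the chosen size along in the worker's URL, so only that variant is precached. All file names, breakpoints, and revisions are placeholders, and it assumes the worker is bundled or registered as a module:

```typescript
// page.ts -- detect the displayed image size, then register the worker.
const size = window.matchMedia("(max-width: 600px)").matches ? "small"
           : window.matchMedia("(max-width: 1200px)").matches ? "medium"
           : "large";
navigator.serviceWorker.register(`/sw.js?size=${size}`, { type: "module" });

// sw.js -- read the size chosen by the page from the registration URL and
// precache only that variant, instead of glob-precaching every size.
import { precacheAndRoute } from "workbox-precaching";

declare const self: ServiceWorkerGlobalScope;
const chosen = new URL(self.location.href).searchParams.get("size") ?? "medium";
precacheAndRoute([
  { url: `/images/hero-${chosen}.jpg`, revision: "1" },   // placeholder entries
  { url: `/images/banner-${chosen}.jpg`, revision: "1" },
]);
```

One caveat I can see: the query string is part of the worker's identity, so a device whose effective size changes would end up registering a new worker.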
If you were watching the State of the Union address (http://www.whitehouse.gov/state-of-the-union-2013), you would have seen graphic supplements that appeared alongside the video stream of the President and served to illustrate his key points.
The video on the site is a composite of this, but during the live streaming these were handled separately.
My question is: what is the best approach for doing this, especially if one wants very tight control of when the graphics appear (i.e. right when the point is made, not before and not long after)?
I'm wondering if any tools exist to facilitate this. I've been scouring Google, but I don't think I have the correct technical vocabulary for what I'm describing, because I'm coming up blank.
I imagine AJAX would be a good starting point, but I'm not sure how to achieve the level of control that they had, or how to handle the back end of things.
For anyone who might encounter this challenge we devised two ways to solve it:
The first is a bit mickey mouse: it requires that you know beforehand how many images, etc., you want to use (which in most cases you would). We wrote a script that repeatedly requests an image and inserts it into the page, and on finding an image, starts requesting the next image in the chain (a sketch follows below).
I.e. display default image -> request image 1
then, displaying image 1 -> request image 2
etc
From your end you can simply drop the images into a folder on your server when you are ready for them to go in. An advantage of this is that the images can be interactive, with links to other content, etc.
The big disadvantage, of course, is a lot of unnecessary requests to your page. In our case we anticipated enough traffic that it didn't seem wise. Also, there are plenty of opportunities for mistakes, and depending on how frequently your timer fires, there are likely to be timing discrepancies.
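For anyone wanting to try it, a minimal sketch of that first method, assuming the images are dropped onto the server as image1.png, image2.png, and so on (the names, container id, and 5-second retry are illustrative):

```typescript
// Sketch: poll for the next image in the chain; when it appears on the
// server, display it and start polling for the one after it.
let nextIndex = 1;

function pollForNextImage(): void {
  const img = new Image();
  img.onload = () => {
    document.getElementById("supplement")?.replaceChildren(img); // assumed container
    nextIndex++;
    pollForNextImage(); // immediately start watching for the next one
  };
  img.onerror = () => {
    setTimeout(pollForNextImage, 5000); // not there yet; try again in 5 s
  };
  // Cache-busting query so the browser re-checks the server each time.
  img.src = `/supplements/image${nextIndex}.png?t=${Date.now()}`;
}

pollForNextImage();
```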
The second costs money: we found the program Ustream (http://www.ustream.tv/producer), which allows us all the image control we require in terms of timing, with the advantage of providing support for media clips, etc. It also allows you to record everything streamed.
The disadvantage is that what the user sees is an integrated video on your site, so that you have to handle links to related content and provide images (if you want your users to have access to them) separately.
Hope this comes in handy for someone.
I would still welcome any suggestions on how to make the first method more effective.
I am wondering if some of you are aware of the architectural approaches the Wave team took to build its GWT web client. Since I am trying to optimize the performance of a GWT app designed for mobile, it is hard not to admire Wave's speedy credentials :)
Is Wave not using GWT-RPC to get regular updates from the server? Firefox shows some JSON communication going over the wire, but nothing that looks like RPC.
How do they proceed when, for instance, a new wavelet is sent? Is there a view object for every wave DTO, or do they use some other pattern?
How is the GUI updated when a response with, say, a new wave arrives? Is the whole wavelet area re-rendered, or do they use some smarter technique to ensure that only the relevant element is touched?
Thanks
This is probably information overload, but since Google Wave is open source you can actually look at how they set things up here.
If you look at WaveView.java, for example, you can see that they are using a client-side event bus like Ray Ryan mentioned in this talk at Google IO 2009. I seem to remember seeing another video where they talked about these aspects of Google Wave:
They use an event system to fire off events when something happens on the client side. The event system manages communication with the server, passing event information up to the server, getting events back from the server, and publishing those events that come back. The event bus uses a kind of buffer so that if a bunch of events are fired off in rapid succession, they can send them all in one batch. For example, when a new Wave arrives, an event with the wave information would get fired, and any portions of the UI that are actively listening for that event would be notified, so that they could determine whether they needed to change themselves accordingly.
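From memory, a toy version of that batching bus might look like the sketch below; the class, endpoint, and 50 ms flush window are mine, not Wave's:

```typescript
// Sketch: a client-side event bus that delivers events locally and buffers
// them for the server, flushing a burst of events in a single batch.
type Handler = (payload: unknown) => void;

class BatchingEventBus {
  private handlers = new Map<string, Handler[]>();
  private outgoing: Array<{ type: string; payload: unknown }> = [];
  private flushTimer: ReturnType<typeof setTimeout> | undefined;

  subscribe(type: string, handler: Handler): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  // Deliver locally and queue for the server in one batch.
  fire(type: string, payload: unknown): void {
    this.handlers.get(type)?.forEach(h => h(payload));
    this.outgoing.push({ type, payload });
    this.flushTimer ??= setTimeout(() => this.flush(), 50); // coalesce bursts
  }

  private flush(): void {
    const batch = this.outgoing.splice(0);
    this.flushTimer = undefined;
    fetch("/events", { method: "POST", body: JSON.stringify(batch) }); // assumed endpoint
  }
}
```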
They used split points (GWT's code-splitting mechanism) so that GWT could break the code up into modules and only load the portions that actually need to be used. Since the Wave UI JavaScript file was originally over 1 MB (minified and compressed), that was pretty important.
Since only certain waves and wavelets would be visible at a time, they actually used some complex techniques to reuse the same DOM elements. So as you scroll down through your list of waves, it's actually taking the DOM element representing the wave at the top of your inbox, changing the information inside, and moving it to the bottom of your scroll area, leaving a blank space in the part of the scroll area that you're not seeing anymore.
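That recycling trick is essentially what is now usually called a virtual (or recycled) list; a stripped-down sketch, with the row height, container layout, and data source all assumed:

```typescript
// Sketch: recycle a fixed pool of row elements while scrolling, instead of
// creating one DOM node per wave. Assumes uniform row height and a container
// that is position:relative with a tall inner spacer driving the scrollbar.
const ROW_HEIGHT = 80; // px, assumed uniform
const pool: HTMLDivElement[] = [];

function renderVisible(container: HTMLElement, waves: string[]): void {
  const first = Math.floor(container.scrollTop / ROW_HEIGHT);
  const visible = Math.ceil(container.clientHeight / ROW_HEIGHT) + 1;

  for (let i = 0; i < visible; i++) {
    const row = pool[i] ?? (pool[i] = container.appendChild(document.createElement("div")));
    // Reuse the same node: reposition it and swap the contents.
    row.style.cssText = `position:absolute;height:${ROW_HEIGHT}px;top:${(first + i) * ROW_HEIGHT}px`;
    row.textContent = waves[first + i] ?? "";
  }
}
```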
Additionally, I'm pretty sure they use something like Comet with JSONP to maintain continuous communication with the server, so they're not polling the server constantly for new updates, but rather there's a dynamically-generated javascript file that's being loaded in incrementally from the server, which contains instructions to fire whatever events the server has decided need to be fired.
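I can't confirm exactly how Wave's transport was wired, but a simpler cousin of that Comet technique -- plain long polling -- looks like this (the endpoint and event shape are assumptions):

```typescript
// Sketch: long polling as a stand-in for Wave's Comet transport. The server
// is assumed to hold the request open until it has events to deliver.
async function listenForEvents(): Promise<void> {
  while (true) {
    try {
      const resp = await fetch("/updates"); // assumed endpoint; blocks server-side
      const events: Array<{ type: string; payload: unknown }> = await resp.json();
      events.forEach(e => dispatch(e.type, e.payload)); // hand off to the event bus
    } catch {
      await new Promise(r => setTimeout(r, 2000)); // back off briefly on error
    }
  }
}

declare function dispatch(type: string, payload: unknown): void;
```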
We use large background images (hi-res photos, up to 700 KB) for our page design.
It's part of the experience of the site that as you browse around, you see different images.
At the moment a different (random) image is loaded on each page request, from a pool of ~15 images, which could grow over time.
I'm looking for a sane way to optimize this:
To avoid the user having to download a big image file on every page view
To reduce load on the server (is this an issue? Will the server keep the images in memory?)
The ideas I have so far include:
A timer which loads a different image at set intervals
Progressively loading other images in the background with ajax
Associating images with specific content (pages, tags)
The question is, how to keep it feeling somewhat random, while minimizing page load times and server hit?
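To make the second idea concrete, here is the sort of background preloading I mean (the pool paths are placeholders):

```typescript
// Sketch: after the page loads, quietly fetch one more random image from the
// pool so the user's next page view can pull it from the browser cache.
const pool = Array.from({ length: 15 }, (_, i) => `/backgrounds/bg${i + 1}.jpg`); // illustrative

window.addEventListener("load", () => {
  const next = pool[Math.floor(Math.random() * pool.length)];
  const img = new Image();
  img.src = next; // warms the cache; nothing is rendered yet
});
```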
I usually avoid sites with huge images; I am very impatient. I would rethink your design.
As a first step, you should make sure that the images can be properly cached:
use sane URLs (no session IDs, etc.)
set appropriate HTTP caching headers (ETag, etc.)
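For illustration, if you served the images through something like Express, those headers might be set as follows (the max-age is an arbitrary choice):

```typescript
// Sketch: serve the background images with long-lived caching headers.
// express.static sets ETag and Last-Modified automatically.
import express from "express";

const app = express();
app.use("/backgrounds", express.static("public/backgrounds", {
  maxAge: "30d", // Cache-Control max-age of 30 days -- illustrative
  etag: true,    // conditional requests get 304 Not Modified
}));
app.listen(3000);
```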
Firstly, hearing that the background images alone are up to 700 KB astounds me. In addition to the content on screen... that is a pretty heavy site.
For starters, I would try image compression tools. Two come to mind: ImageMagick and PNGCrush. PNGCrush is excellent at removing all the extraneous metadata attached to photos without compromising photo quality.
I recommend this because compressing the images lets the user download less content, which means quicker load times, which, at the end of the day, is what users want.
I would also cache the images, such that when a user re-visits the site, the image is already cached on their end. This minimises the HTTP requests that are made each time a user visits your site.
An example of where this technique is used on a commercial site is www.reactive.com. If you look at the /js/headerImages.js file, they make use of image caching. Funnily enough, you will find the same source code at: http://javascript.internet.com/miscellaneous/random-image.html
Considering that you have mentioned that images are randomly loaded, I am assuming you are using a Javascript library such as jQuery to create the effect.
If you are, you can minimize page load times by using a CDN rather than referencing a local copy of the jQuery library stored on your server. I performance-tested a site I made for a client, and over an average of 20 hits, this technique saved 1.6 seconds!
Hope that helps for now :)