This is the first time I've asked a question here, so I'm sorry about my English.
My use case is below:
We use a websocket for the server to communicate with the client. The server sends images to the client, and each time there is only a small difference between consecutive images. So, how can we display the images with the highest performance?
I was thinking I could get the differing bytes from the image's typed array and just replace them, but how would I implement this?
If your server knows what is different from what came before, then it can create a transparent image that only has the different bits in it and you can just stack that image on top of the previous image(s). In the places that it is the same as before, it can just be transparent (no image data). The relatively small amount of different data should compress into a pretty small image file.
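A server-side sketch of that idea using Pillow (an untested illustration; the function name and the exact masking approach are my own choices):

```python
from PIL import Image, ImageChops

def diff_overlay(prev_path, curr_path, out_path):
    """Emit a PNG that is transparent wherever curr matches prev."""
    prev = Image.open(prev_path).convert("RGB")
    curr = Image.open(curr_path).convert("RGB")

    # Greyscale difference: 0 where the two frames are identical.
    diff = ImageChops.difference(prev, curr).convert("L")
    # Binary mask: 255 (opaque) wherever anything changed.
    mask = diff.point(lambda px: 255 if px else 0)

    # Start fully transparent, then paste in only the changed pixels.
    overlay = Image.new("RGBA", curr.size, (0, 0, 0, 0))
    overlay.paste(curr, (0, 0), mask=mask)
    overlay.save(out_path, "PNG")
```

On the client you'd draw each overlay onto a canvas on top of the previous frame; since the unchanged regions carry no pixel data, the PNGs should compress down nicely.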
I have 1500 pictures that need the address where they were taken to be shown in the corner of the picture. I have the pictures geo-tagged.
I need help extracting the GPS data, converting it to an address, and then writing that address into the bottom-right corner of the picture. Can anyone help or point me in the right direction, please?
You're going to need two things. First you need an application that will extract the EXIF data that you are interested in. You should be able to write this yourself, as it is fairly simple to do. You will need the JPEG standard, and just enough of it to identify the markers, specifically the APPn markers. You are also going to need the EXIF and (possibly the) TIFF standards to figure out how to extract the data you need from the EXIF APP1 marker.
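To give a feel for what that marker scan looks like, here is a rough Python sketch (error handling is omitted, and `find_exif_segment` is just an illustrative name):

```python
import struct

def find_exif_segment(path):
    """Scan JPEG markers and return the TIFF-structured EXIF payload."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":               # SOI marker
            raise ValueError("not a JPEG file")
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return None                         # ran off the end / corrupt
            (length,) = struct.unpack(">H", f.read(2))
            body = f.read(length - 2)               # length includes its own 2 bytes
            if marker == b"\xff\xe1" and body.startswith(b"Exif\x00\x00"):
                return body[6:]                     # APP1: the EXIF data
            if marker == b"\xff\xda":               # SOS: image data follows
                return None
```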
Writing the information to the corner of the image is the tough part. There are probably command line applications that will allow you to do that already. If worst comes to worst, there are various language APIs that will allow you to read a JPEG stream into a buffer, draw text to the buffer, then write the buffer back to a JPEG stream.
You will most likely need to use a programming language for this; I think Python would be suitable, as it's easy to get started with and has the libraries needed for your task.
For example, in order to extract the location (coordinates) from the JPEG files you can use pyexiv2.
To transform those coordinates to addresses you need to use a geocoding service such as Google's Geocoding API - you can use their Python library directly or code your own using something like requests.
Now that you have the address data you can overlay that data onto images using Python's pillow library.
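To sketch the whole pipeline in one place (using Pillow's built-in EXIF reader instead of pyexiv2, and assuming you have your own Geocoding API key; the function names are mine):

```python
import requests
from PIL import Image, ImageDraw
from PIL.ExifTags import GPSTAGS

def gps_coords(path):
    """Read (lat, lon) from a JPEG's EXIF GPS IFD, or None if absent."""
    exif = Image.open(path).getexif()
    gps = {GPSTAGS.get(k, k): v for k, v in exif.get_ifd(0x8825).items()}
    if "GPSLatitude" not in gps:
        return None

    def to_deg(dms, ref):
        d, m, s = (float(x) for x in dms)       # degrees, minutes, seconds
        deg = d + m / 60 + s / 3600
        return -deg if ref in ("S", "W") else deg

    return (to_deg(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_deg(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

def reverse_geocode(lat, lon, api_key):
    """Ask Google's Geocoding API for a human-readable address."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"latlng": f"{lat},{lon}", "key": api_key},
    )
    results = resp.json().get("results", [])
    return results[0]["formatted_address"] if results else None

def stamp_address(path, address, out_path):
    """Draw the address in the bottom-right corner of the image."""
    img = Image.open(path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = draw.textbbox((0, 0), address)[2:]   # text width/height
    draw.text((img.width - w - 10, img.height - h - 10), address, fill="white")
    img.save(out_path, "JPEG")
```

From there it's a loop over the 1500 files: read the coordinates, reverse-geocode them, stamp the address, save.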
If you're looking for some code to get started let me shamelessly plug my own project called photomap; you can find code to read GPS information from images here: https://github.com/iticus/photomap/blob/master/handlers.py#L170
In our web application, the users need to review a large number of images. This is my current layout. 20 images will be displayed at a time, with a pagination bar above the thumbnails. Clicking a thumbnail will show the enlarged image to the left. The enlarged image will follow the scrollbar so it's always visible. Quite simple actually.
I was wondering what the best interface would be in this scenario:
One option is to implement an infinite scroll script which will lazy-load thumbnails as the user scrolls. The thumbnails not visible will be removed from the DOM. But my concern with this approach is that the number of changes to the DOM will slow down the page.
Another option could be something like Google's Fastflip.
What do you think is the best approach for this application? Radical ideas welcomed.
I think the question you have to ask is: what action is user supposed to do? What's the purpose of the site?
If "review images" entails rating every image, I'd rather go with a Fastflip approach where the focus is on the single image. A thumbnail gallery will distract from the desired action and might result in a smaller amount of pics rated/reviewed.
If the focus should rather be on the comparison of a given image against others, I'd say try the gallery approach, although I wouldn't impement an infinite scroll with thumbnails because user can quickly get lost in the abundance of choices. I think a standard pagination (whether static or ajaxified) would be better if you choose to go this route.
Just my 2c.
If you paginate thumbnails, you can pre-generate a single image containing all the thumbnails for each page, then use an image map to handle mouseover text and clicking. This will reduce the number of HTTP requests and possibly the number of bytes sent. The separation distance between images should be minimized for this to be most efficient. This does have some disadvantages, though: for example, the thumbnails can no longer be cached or updated individually.
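Generating such a per-page sheet is straightforward with Pillow; a rough sketch (the grid and thumbnail sizes are arbitrary):

```python
from PIL import Image

def sprite_sheet(paths, out_path, thumb=(150, 150), cols=5):
    """Stitch one page's thumbnails into a single image for an image map."""
    rows = -(-len(paths) // cols)                       # ceiling division
    sheet = Image.new("RGB", (cols * thumb[0], rows * thumb[1]), "white")
    for i, path in enumerate(paths):
        im = Image.open(path)
        im.thumbnail(thumb)                             # resize in place, keep aspect
        x, y = (i % cols) * thumb[0], (i // cols) * thumb[1]
        sheet.paste(im, (x, y))
    sheet.save(out_path, "JPEG", quality=85)
```

The page then serves this one JPEG plus an image map whose rectangles match the grid cells.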
To reduce image download size at the expense of preprocessing, you can try to save each image in the format (PNG or JPG) most efficient for its contents using an algorithm like the one in ImageGuide. Similarly, if the images are poorly compressed (like JPEGs from a cell phone camera), they can be recompressed at the cost of some quality.
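A crude stand-in for that algorithm is to simply encode each image both ways in memory and keep whichever comes out smaller, e.g.:

```python
import io
from PIL import Image

def smallest_format(path, jpeg_quality=85):
    """Encode as both PNG and JPEG in memory; return whichever is smaller."""
    img = Image.open(path).convert("RGB")
    sizes = {}
    for fmt, kwargs in (("PNG", {}), ("JPEG", {"quality": jpeg_quality})):
        buf = io.BytesIO()
        img.save(buf, fmt, **kwargs)
        sizes[fmt] = buf.tell()
    return min(sizes, key=sizes.get)
```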
Once the site has some testers, you can analyze patterns in which images tend to be clicked (if a pattern exists) and preload the full-size images, or even pre-load all of them once the thumbs are loaded.
You might play with JPEG2000 images (you did say "radical ideas welcomed"), which thumb very easily, because the thumbnail and main image needn't be sent as if they are separate files. This is an advantage of the compression format -- it isn't the same as the hack of telling the browser to resize the full size image to represent its own thumbnail.
You can take a look at Google's WebP image format.
On the server side, consider a separate image server optimized for static content delivery, perhaps using Nginx or the Tux web server.
I would show the thumbnails, since the user might want to skip some of the pictures. I would also stay away from pagination in the sense of
<<first <previous n of x next> last>>
and go for something easier to implement and more efficient: a
"load x more pictures" button.
No infinite scroll whatsoever and, why not, maybe even no scroll at all. Just "load x more" and "previous x".
Although this answer might be a bit unradical and boring, I'd go with exactly your suggestion of asynchronously loading the thumbnails (and of course the main picture) as they come into view. A similar technique is used by Google+ in the pane for adding people to circles. This way, you spend server resources and bandwidth only on the pictures the client actually needs. As Google+ shows, the operations on the DOM tree are fast enough and don't slow down a computer from recent years.
You might also prebuild a few rows of the thumbnail table ahead of time with a dummy image (e.g. an animated "loading circle" GIF) and replace the images as they arrive. That way, the table in view is already built and does not need to be rerendered, as the flow elements following the table otherwise would be if no images were in it during scrolling.
Instead of paginating the thumbnails (as suggested by your layout scheme), you could also think about letting users filter the images by tag, theme, category, size or any other way to find their results faster.
This may well be a bit of an open-ended question.
The site I am working on needs to be optimised for performance. One of the key areas is optimising the file sizes of the images used on the site.
Unfortunately these images are being created by employees who do not have the required knowledge for creating images for the web, and it is my job to produce a set of guidelines for them to use.
I was wondering whether there are any resources/guidelines/literature regarding typical file sizes for images of different dimensions, as I would like to include something like this to help ensure their images are being created properly.
Any info would be greatly appreciated.
Thanks in advance
I can't answer the opinion question, but I can suggest some guidelines that will keep your images smaller.
First off, if they're using Photoshop to edit their images, it's likely they're storing a whole bunch of crap in the headers (digital papertrail, EXIF data, and such). Also, folks will frequently save in too high a bit depth.
For novice users, trying to explain why they need to use "save for web" is more likely to confuse them. Instead, just point them at:
http://www.smushit.com/ysmush.it/
This site is rather handy - it will compress all the images on a page you specify, or you can upload the images.
You should strongly consider writing some guidelines about where images are stored as well. It's frequently very beneficial to have your static image content stored on several servers, apart from your dynamic content. Most browsers will only download a limited # of files at a time from any given website (usually it's 2).
Unless there's a good reason, all your images should be cached using one of the HTTP cache techniques (expires, etags, etc).
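Just as a toy illustration with Python's standard library (in practice you'd configure this in your web server rather than in code), serving images with a long-lived cache policy might look like:

```python
from http.server import SimpleHTTPRequestHandler, HTTPServer

class CachingHandler(SimpleHTTPRequestHandler):
    """Serve static files, adding a long-lived cache policy for images."""
    def end_headers(self):
        if self.path.endswith((".png", ".jpg", ".gif")):
            # Let browsers and proxies keep images for 30 days.
            self.send_header("Cache-Control", "public, max-age=2592000")
        super().end_headers()

HTTPServer(("", 8000), CachingHandler).serve_forever()
```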
Good luck.
72 dpi as a resolution and either JPEG or PNG formats work best.
Try to use images at the exact pixel size they will be displayed at, as specified by the image's height and width attributes.
You can set the output quality of a JPEG image, which will also save file size, although there is a trade-off against image quality.
I hope this is of use.
We would like to display very large (50 MB plus) images in Internet Explorer. We would like to avoid compression, as compression algorithms are not what CSI would have us believe, and the resulting files are too lossy.
As a result, we have come up with two options: Silverlight Deep Zoom or a Flash based solution (such as Zoomify). The issue is that both of these require conversion to a tiled output and/or conversion to a specific file type (Zoomify supports a single proprietary file type, PFF).
What we are wondering is if a solution exists which will allow us to view the image without a conversion before hand.
PS: I know that you can write an application to tile the images (as needed or after the load process) and output them; however, we would like to do this without chopping up the file.
The tiled approach really is the right way to do it.
Your users don't want to download a 50mb file before they can start viewing the image. You don't want to spend the bandwidth to serve 50 megs to every user who might only view a fraction of your image.
If you serve the whole file, users will eventually be able to load and view it, but it won't run smoothly for most of them.
There is no simple non-tiled way to serve just a portion of an image, unless you want to use a server-side library like ImageMagick or PIL to extract a specific subset of the image for each user. You probably don't want to do that, because it will place a significant load on your server.
Alternatively, you might use something like Google's map tool to provide zooming and scaling. Some comments on doing that are available here:
http://webtide.wordpress.com/2008/08/27/custom-google-maps/
Take a look at OpenSeadragon. To make an image work with OpenSeadragon, you need to generate one of the zoomable image formats mentioned in its documentation, then follow its getting-started guide.
The browser isn't going to smoothly load a 50 meg file; if you don't chop it up, there's no reasonable way to make it not lag.
If you don't want to tile, you could have the server open the file and render a screen-sized view of the image for display in the browser at the particular zoom resolution requested. This way you aren't sending 50 meg files across the line when someone only wants an overview of the image. That is, the browser requests a set of coordinates and an output size in pixels, the server opens the larger image and creates a smaller image that fits the desired view, and sends that back to the web browser.
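The core of such a handler is only a few lines with a library like Pillow (an illustrative sketch; the HTTP plumbing is omitted and the function name is mine):

```python
from io import BytesIO
from PIL import Image

def render_view(path, box, out_size, quality=80):
    """Crop `box` (left, top, right, bottom) from the big image and
    scale it to `out_size` pixels for delivery to the browser."""
    with Image.open(path) as img:
        view = img.crop(box).resize(out_size, Image.LANCZOS)
    buf = BytesIO()
    view.save(buf, "JPEG", quality=quality)   # quality trades size vs. detail
    return buf.getvalue()
```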
As far as compression goes, you say it's too lossy, but if that's what you are seeing, you are probably using the wrong compression algorithm or settings for the type of image you have. The JPEG format has quality settings to control lossiness, and PNG compression is lossless (the pixels you get after decompressing are exactly the values you had prior to compression). So consider changing the compression you use, and don't just rely on the default settings in an image editor.
What is the name of the technology behind Google Maps which allows the server to send only the part of the map requested by the user, to enhance performance? And is there any library to handle this?
The technology could generically be described as a map server. The map server generates a map for the requested location from a large set of pre-generated map tile images covering the entire planet. The map server may overlay data from other databases on top of this. The combination of a map viewer client and geographical database is traditionally called a Geographical Information System (GIS).
Anyone can write web applications that embed Google maps using the Google Maps API. There is also a fine open source map server (called MapServer) should you wish to deploy your own map server.
As stated, Google generated all of these 256x256 tiles and is just serving the relevant tiles. From your comments it seems that you are looking for something to generate these tiles for you. Several people have written code to chop an image into tiles - for instance http://crazedmonkey.com/blog/googletilecutter or http://www.klokan.cz/projects/gdal2tiles/ both seem to be able to do what you're looking for.
If you look at the link for a google maps page it will look like this:
http://maps.google.com/maps?f=q&hl=en&sll=37.0625,-95.677068&sspn=53.345014,88.769531&ie=UTF8&ll=41.226264,-81.454246&spn=0.012507,0.021672&z=16
The JavaScript code on the page and the server code use the numbers in the link (the location you are viewing, the zoom level, and the span), together with the size of your viewing window, to determine which tiles to send to your browser.
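For the widely documented "slippy map" tiling scheme that Google Maps popularized (256x256 Web Mercator tiles), the tile indices for a given location and zoom work out like this (Google's internals may differ in detail):

```python
import math

def tile_for(lat, lon, zoom):
    """Map WGS84 coordinates to x/y tile indices in the standard
    Web Mercator tiling scheme (256x256 tiles)."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

# The viewer fetches the handful of tiles covering the viewport, e.g.
# for the ll/z values in the URL above:
print(tile_for(41.226264, -81.454246, 16))
```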
There are commercial libraries that can provide the mapping data, as well as tools to display and navigate the data. One I've seen used before is Geomicro.
This is something that you can try out yourself with open source tools:
http://www.geoserver.org
http://www.openlayers.org
and
last but not least
http://geowebcache.org/
You should be able to setup a minimal environment that does something similar to maps.google in a couple of hours.
You can also use the Google Maps API with your own images. Of course, they don't need to be a map; they can be any images. This will allow the user to drag and zoom, like in Google maps.
Here's a nice rundown of an open source stack for generating Web-based maps from one of the founders of EveryBlock.com: http://www.alistapart.com/articles/takecontrolofyourmaps
The generic name for the underlying discipline is GIS.
Are you asking for more details out of general curiosity, or do you have a specific technical need for a project?
Google gets high-definition satellite shots from services that sell these images; it then stores and crops these images and serves only those that are required when you look at a certain point. Have you noticed, when you zoom in and out, that you see square tiles appearing? Those are the tiles Google's servers are sending you.
You also have to consider how they handle the load with the Google File System and MapReduce.
It's just a huge image consisting of square chunks that are downloaded independently (using AJAX and so on). I believe it's done by some kind of internal Google library (it could also be GWT).
More on this topic:
http://blog.grimpoteuthis.org/2005/02/mapping-google.html
Google Maps and Google Earth use something known as KML, or "Keyhole Markup Language", a special variant of XML. It's named after Keyhole, the company Google acquired (itself named after the Keyhole reconnaissance satellites). You can store information on a location in Google Earth (and it will eventually trickle down to Google Maps) by using this markup to geocode its specific latitude and longitude coordinates. You can even include altitude.
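For illustration, a minimal KML placemark looks like this (note that KML orders coordinates as longitude,latitude,altitude):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Sample location</name>
    <Point>
      <!-- longitude,latitude,altitude -->
      <coordinates>-122.084,37.422,0</coordinates>
    </Point>
  </Placemark>
</kml>
```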
Not to answer the question, just to broaden the information: Microsoft has something called "Deep Zoom" for Silverlight that makes it easy to do that kind of effect.
It's a free composer where you tile up your pictures (or one big picture) and adjust some other settings; it then breaks everything down into lots of smaller pictures in subfolders, one folder for each zoom level, and creates a page that can consume those in a smooth way.
A good blog entry about it:
http://weblogs.asp.net/jgalloway/archive/2008/03/21/why-silverlight-2-deep-zoom-really-is-something-new.aspx
I'm working on a cross-browser viewer for very large historic plans and sketches. A good help for the first steps was an old blog post I found at http://www.cadmaps.com/gisblog/?p=7
for understanding image pyramids (that's what Google Maps works with).
With a "tiler" I produce a lot of images like testImage_0001111100.png, where 0001111100 encodes, for example, the 5th zoom level and the x/y position in the image pyramid.
Most of the calculation (neighboring images, the image stack up and down) is done server-side by PHP called via Ajax requests.
At the moment I'm struggling with (not insoluble) problems in smooth panning and zooming. That's my problem - but do read the article.
AJAX allows you to update part of the page from JavaScript. Basically, the JavaScript makes a request back to the web server and replaces part of the existing page with the result.
jQuery is one library that makes this easier. I don't know what Google uses.