This is a follow-up question to: mclapply vs for loops for plotting: speed and scalability focus
If I have a bunch of images named in the pattern "n_mu_stdev.jpeg", as given in Chase's solution to the above question, is there an R-based method that allows some sort of slideshow with a slider tied to the parameter values? Of the packages I have looked at, most image readers tend to read the image in as a data file and then re-plot it rather than simply displaying the raw image, which slows things down dramatically...
I am not very familiar with JavaScript, CSS or jQuery, so if that is the proposed solution I would probably need a step-by-step guide covering what to install, etc.
One other solution I had considered was uploading each image, as it is generated, to a photo-sharing site like Flickr or Imgur (somewhat like knitr does), keeping note of the URLs; the sliders would then simply choose which row of a data frame to look at, which provides the correct URL from which to fetch and display the image.
Use jQuery. Read the documentation:
jQuery library: http://jquery.com/
jQuery slideshow plugin: http://www.twospy.com/galleriffic/
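For the specific use case in the question (file names built from the parameters, e.g. "n_mu_stdev.jpeg"), you may not even need a full slideshow plugin: a few range sliders that rebuild the file name and swap the src of a single img element will do. Below is a minimal sketch; the element ids, parameter ranges and image location are all assumptions.

```javascript
// Minimal sketch: three range sliders drive which pre-rendered image is shown.
// Assumes markup like:
//   <img id="plot" src="">
//   <input type="range" id="n"  min="10" max="100" step="10">
//   <input type="range" id="mu" min="0"  max="5"   step="1">
//   <input type="range" id="sd" min="1"  max="3"   step="1">
// and that the images sit next to the page, named "n_mu_stdev.jpeg".
$(function () {
  function updateImage() {
    var file = $('#n').val() + '_' + $('#mu').val() + '_' + $('#sd').val() + '.jpeg';
    $('#plot').attr('src', file); // only swaps the src -- the raw JPEG is displayed as-is
  }
  $('#n, #mu, #sd').on('input change', updateImage);
  updateImage(); // show an initial image on page load
});
```

Because the browser only swaps an src attribute, nothing is re-read into R or re-plotted, which addresses the speed concern in the question.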
I have 4 sizes for a single image on a page of my eCommerce website:
600x600px, 350x350px, 220x220px, 110x110px
There are 3 solutions:
1. Load the big image (600x600px) from the server and cache it, then generate the thumbnails from the cached copy with a client-side plugin.
2. Load the big image and the thumbnails all from the server (in this case, the thumbnails are generated on the server).
3. Load the big image and create the thumbnails by resizing it with CSS (or, for example, load the 600x600px and 350x350px versions and create the thumbnails with CSS from the 350x350px one).
Which solution is best for SEO?
Or, if there is another way, I would appreciate hearing it.
My considerations regarding your solutions, assuming you are building a "classical, client-server paradigm" eCommerce website (not a SPA application):
1. I believe this solution involves some JavaScript for the resizing, so the resized images won't be visible to search engine crawlers (or their indexation will be more difficult); see the sketch below.
2. This seems the best approach. Thumbnails are generated server side and rendered in the HTML at the user/client's request. The page will be crawled by search engines together with your HTML for their indexes. There is also less overhead on the client side (performance), as no dynamic image scaling is required.
3. The big image could potentially slow down the download of your page (this depends on many factors) and could make your page score lower in search engine algorithms. Also consider users who access your page from mobile devices, for whom download speed is very important.
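To make point 1 concrete, here is a minimal sketch (the file path and sizes are placeholders) of what client-side thumbnail generation typically amounts to: the resized versions exist only in the visitor's browser, so there are no thumbnail URLs for a crawler to index.

```javascript
// Minimal sketch of client-side thumbnail generation (solution 1).
// The resized images are produced in the browser as data URIs, so a search
// engine crawler only ever sees the original 600x600 file.
function makeThumbnail(img, size) {
  var canvas = document.createElement('canvas');
  canvas.width = size;
  canvas.height = size;
  canvas.getContext('2d').drawImage(img, 0, 0, size, size);
  return canvas.toDataURL('image/jpeg'); // data URI, not a crawlable image URL
}

var big = new Image();
big.src = '/images/product-600x600.jpg'; // placeholder path
big.onload = function () {
  [350, 220, 110].forEach(function (size) {
    var thumb = document.createElement('img');
    thumb.src = makeThumbnail(big, size);
    document.body.appendChild(thumb);
  });
};
```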
For SEO, please also consider the following:
Include a meaningful subject in the image alt text.
Image captions are important because they are among the most well-read pieces of content.
Use file names containing relevant keywords.
More from a reputable website:
http://searchenginewatch.com/sew/opinion/2120682/ranking-image-search
I'm wondering what some best practices are for decreasing the page load time of single-page websites, in a way that won't hurt SEO.
I'm leaning toward an AJAX solution with "hijax linking", but I'm wondering about best practices for the load order of a page. For instance, say I have a simple webpage with home, about, pictures of my cat, contact, etc., and I'm planning to have it all show up on the homepage via vertical scrolling, allotting one "screen" worth of content per item.
I'm coding this in WordPress, so my main idea would be to first load the first "screen", i.e. the hero section of the homepage, as part of home.php, so the user doesn't have to wait for the whole thing (and for SEO). Then, once that has finished loading, load the next four via AJAX in the background. So I'm wondering what the best strategy might be for that. Someone provided this answer elsewhere:
"Build a standard 5 page site using php with proper separation of header, footer, content. Then use javascript to redirect to a single (separate) page with all content include()ed on the page."
In WordPress I'd take this to mean: create a separate page with a loop that grabs the other four "screens" as posts, and then load that page after home.php has loaded. Does anyone see any issues with this approach, or, as the question asks, have any better practices to accomplish this? I'd appreciate them. Thanks.
There are several things you can do:
Improve the performance of your back-end code, if there is any.
Pagination: split the page into smaller pages.
Caching.
Decrease the size of the content, decrease the size of background images, and compress your JS.
Compress content.
Most of the time the best optimization will depend on your situation. To start with, one of the above will do it for you.
Your question is tagged "wordpress", so I am assuming that you use WordPress.
If so, a logical starting point would be to use one of the WordPress caching plugins. I use Quick Cache for my website and it makes a significant difference.
But you shouldn't stop with the plugin. Consider the quality of the theme you are using: poorly designed themes may make inefficient database calls and slow your website down.
Delaying and loading part of the page with AJAX shouldn't be your first optimization action; try all the other options first.
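If you do eventually defer the below-the-fold sections the way the question describes, the pattern is straightforward. The sketch below is a minimal, framework-free illustration; the /extra-sections URL and the placeholder div are assumptions (in WordPress it could be any page or endpoint that returns the pre-rendered HTML for the remaining screens).

```javascript
// Minimal sketch: render the hero section normally in home.php, then pull the
// remaining "screens" in the background once the initial page has loaded.
window.addEventListener('load', function () {
  var container = document.getElementById('deferred-sections'); // empty placeholder div in home.php

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/extra-sections'); // placeholder URL returning the other screens' HTML
  xhr.onload = function () {
    if (xhr.status === 200) {
      container.innerHTML = xhr.responseText; // inject the remaining screens
    }
  };
  xhr.send();
});
```

Keep in mind that content injected this way may not be indexed as reliably as server-rendered HTML, which is exactly why the first screen should stay in home.php.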
I have a website made to provide free web-based tools for making indie games. Currently, it only supports artists contributing to games. The features for helping artists consist of a set of community tools that allow artists to upload images based on a description; we then post each image on a gallery page. Other artists can upload their own images, and each image can have several revisions.
The way I chose to implement the image upload and display feature is by serializing uploaded images to a byte array and storing it in the database. When I need to display an image in the UI, I just call a controller action I named "GetScaledGalleryImage" and pass in the image ID. That controller action takes the binary from the database, converts it back into an image, and returns the requested image.
This works very well functionally, but the problem I realized later is that the Google crawler thinks all of my images are named "GetScaledGalleryImage". So if someone searches for "sylph" on Google Images, nothing comes up from my site, but if someone searches for site:watermintstudios.com getscaledgalleryimage, all of my images come up.
Here is an example of the URL that is output in my HTML: http://watermintstudios.com/EarnAMint/GetScaledMedia/68?scale=128
In the past, pre-MVC, I would handle 404 errors and return content based on what was requested, even if the page didn't actually exist. That would, of course, allow me to have the images pulled back by image name (or description).
Is that the best way to do this, or is there a better option? Something simpler would be nicer, like if I could just do http://watermintstudios.com/EarnAMint/GetScaledMedia/Iris%20Doll?id=68&scale=128, but based on how Google indexes images, would that give me what I need? Or do I need to provide image file extensions for maximum indexability?
Thanks all
When doing search engine optimization, it is important to always provide alt text for your images, e.g. alt="this is a crazy robot". This will help the crawler identify them. Note: always use alt; don't literally give every image the alt text "this is a crazy robot".
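On the URL question itself, the general idea (regardless of stack) is to keep the numeric id for the database lookup while putting a descriptive, keyword-bearing file name, ideally with an extension, in the path that crawlers see. A rough sketch of that routing idea in Node/Express follows; this is not the asker's ASP.NET MVC stack, and the route shape and the loadAndScaleImage helper are purely illustrative.

```javascript
// Rough sketch (Node/Express, illustrative only): serve gallery images at a
// descriptive, crawlable URL while still looking the record up by numeric id.
const express = require('express');
const app = express();

// Hypothetical helper: fetch the image bytes from the database and resize them.
async function loadAndScaleImage(id, scale) {
  // ... database lookup and resizing would go here ...
  return null; // placeholder
}

// e.g. GET /EarnAMint/media/68/iris-doll.jpg?scale=128
app.get('/EarnAMint/media/:id/:filename', async (req, res) => {
  const scale = parseInt(req.query.scale, 10) || 600;
  const image = await loadAndScaleImage(req.params.id, scale);
  if (!image) return res.sendStatus(404);
  res.type('image/jpeg').send(image); // the slug in :filename exists only for crawlers/users
});

app.listen(3000);
```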
In the past, I created some divs to act like articles. Now I am thinking about changing them to the HTML5 article tag...
Is there an important difference (in terms of efficiency) between using the native HTML elements and using equivalent user-defined divs?
For example, will the browser load pages faster if they are built only with native HTML elements?
Short answer: No.
Long answer: maybe, if it decreases the amount of markup you use, but not likely.
The benefit of using semantic tags is to add more meaning to the markup, not improve performance.
Maybe. When you create a div and add styling to it, the browser needs to first interpret the element, then apply the styles to it and render it. If you use the appropriate HTML element, it may put less of a burden on the rendering engine.
I have a web page loaded up in the browser (i.e. its DOM and element positioning are both accessible to me) and I want to find the block element (or a sorted list of these elements), which likely contains the most content (as in a continuous block of text). The goal is to exclude things like menus, headers, footers and such.
This is my personal favorite: VIPS: a Vision-based Page Segmentation Algorithm
First, if you need to parse a web page, I would use HTMLAgilityPack to transform it into XML. It will speed everything up and will enable you, using a simple XPath, to go directly to the BODY.
After that, you can iterate over all the divs (you can get all the DIV elements in a list from the Agility Pack) and extract whatever you want.
There's a simple technique to do this, based on analysing how "noisy" the HTML is, i.e., the ratio of markup to displayed text throughout an HTML page. The Easy Way to Extract Useful Text from Arbitrary HTML describes this technique, giving some Python code to illustrate it.
Cf. also the HTML::ContentExtractor Perl module, which implements this idea. If you wanted to use this, it would make sense to clean the HTML first, e.g. using BeautifulSoup.
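Since the question says the page is already loaded and its DOM is accessible, a rough in-browser sketch of the same idea is possible; the heuristic below (crediting paragraph text to its parent block and penalizing link-heavy blocks) is my own crude, Readability-style approximation, not the algorithm from the articles above.

```javascript
// Crude sketch: credit each paragraph's text to its parent block, penalize
// link text (menus, headers and footers are mostly links), and pick the
// parent block that accumulates the most "real" text. The weights are guesses.
function findMainContentBlock(doc) {
  var scores = new Map();
  doc.querySelectorAll('p').forEach(function (p) {
    var parent = p.parentElement;
    if (!parent) return;
    var textLen = (p.innerText || '').trim().length;
    var linkLen = 0;
    p.querySelectorAll('a').forEach(function (a) {
      linkLen += (a.innerText || '').length;
    });
    var score = Math.max(0, textLen - 2 * linkLen);
    scores.set(parent, (scores.get(parent) || 0) + score);
  });
  var best = null;
  scores.forEach(function (score, el) {
    if (!best || score > best.score) best = { element: el, score: score };
  });
  return best && best.element;
}

// Usage in the browser console:
// console.log(findMainContentBlock(document));
```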
I would recommend Vit Baisa's thesis on Web Content Cleaning; I think he has some code too, but I can't find a link for it. There is also a discussion of this very problem on the LingPipe natural language processing blog.