404 Page Not Loading Images (File Source / Layers Problem)

I'm facing what could be a simple problem, but haven't been able to find any solutions yet.
I recently built a simple 404 page for my website (nofound.html) which, through a .htaccess rule (ErrorDocument 404 /nofound.html), lets me catch all URL errors. Basic.
The problem is that, since the 404 response can be triggered from different directory depths (the root, /dir1, /dir1/dir2, etc.), the page does not load its styles and images correctly: the relative source paths no longer resolve (image.png resolves fine when nofound.html is served for a root-level URL, but it would need to become ../image.png, ../../image.png, and so on for deeper directories).
I've managed to get the styles to load by linking the same stylesheet twice (styles.css AND ../styles.css), but for the images I haven't found any workaround yet, other than duplicating the image across directory levels, which is cumbersome and redundant.
Any thoughts? Thanks in advance!

Although it does not follow best practices, my final solution was simply to use absolute URLs (such as https://website/img.png) rather than relative paths (../../img.png) in the page's references.
That way, every element on the page is loaded from its actual, absolute location, regardless of how deep in the site architecture the URL that triggered nofound.html happens to be.
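
For illustration, a minimal sketch of the approach (the domain and file names echo the examples above and are otherwise hypothetical):

# .htaccess (from the question): serve nofound.html for any 404
ErrorDocument 404 /nofound.html

<!-- nofound.html: absolute URLs resolve identically at any depth -->
<link rel="stylesheet" href="https://website/styles.css">
<img src="https://website/img.png" alt="Page not found">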

Related

Modify WordPress plugin?

I have a third-party WordPress plugin which, although no longer updated, still works fine, and for which I've not been able to find an alternative.
It's 'Zajax', which ajax-loads internal pages, thus enabling a streaming-radio audio player to stay fixed at the bottom of the viewport, with continuous play across page changes.
However, it appears to require absolute URLs; with root-relative URLs it reloads the whole page (and thus stops continuous play).
This is a hindrance, because I normally use root-relative URLs, and hence sometimes forget to ensure that all internal URLs are absolute rather than root-relative.
I want to modify the plugin so that it'll work with root-relative URLs, but I don't know enough to do this.
Using root-relative URLs is actually not a good idea here, but if it is comfortable for you, then this small jQuery snippet may help with the problem.
jQuery("a").each(function(){
if (jQuery(this).attr("href").indexOf("http")==-1){
jQuery(this).attr("href","https://yourwebsiteurl.com/"+jQuery(this).attr("href"));
}
});
You can put this code in the footer area of your website; it will detect root-relative links and convert them to absolute ones (without changing anything in your backend, of course).

Lightbox2: Display other picture if named image is lost

I have a lot of galleries displayed with Lightbox2 and it works fine.
Now I want to delete the larger version of the pictures, but keep the gallery with the thumbnails for visitors.
How can I make Lightbox2 display an alternative image if the file referenced in the HTML no longer exists?
I couldn't find an option in lightbox.js for handling missing targets.
I had the same question, but after a little research I decided that Lightbox2 is not the right place to handle missing images. Instead, that should be handled at the server or application level.
The web server will respond with a 404 error for any missing resource, whether a web page, image, or anything else. In most cases, it also returns a small HTML page to alert the user (such as this example at Google).
You can usually configure your server or application to return a default 404-style image instead of an HTML page if the requested resource was an image. That will then be displayed to the user instead of the broken image symbol.
How you do this of course depends on the particular server/application stack you are using, but here is a good solution for Apache.
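For example, on Apache with mod_rewrite enabled, it can be a short .htaccess rule along these lines (a sketch; the placeholder path is hypothetical):

RewriteEngine On
# If the requested file does not exist on disk...
RewriteCond %{REQUEST_FILENAME} !-f
# ...and the URL looks like an image, serve a placeholder instead
RewriteRule \.(jpe?g|png|gif)$ /images/missing.png [L]

Because the rewrite happens internally, Lightbox2 simply receives the placeholder image at the original URL and displays it as usual.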

Does use of echo base_url(); to call CSS, images and Javascript files make a website slow?

I am using CodeIgniter. I keep my images, CSS and JavaScript files in a folder called "support" in the document root of my application, so my document root folder looks like this:
.settings
application
support
system
.buildpath
.project
index
.htaccess
Now my question is: will it make my website take time to load, since I have to use <? echo base_url();?>support/ every time I need to get something from my support folder? Because when I use <? echo base_url();?> I am actually calling the full website address, and I have 7 CSS and 13 JavaScript files to call from "support", so it will definitely take time to load the website. (Please correct me if I am wrong.) If a website can get slow because of this, could you please tell me where exactly I should put my CSS, images and JavaScript files? I heard views is not a good place for this.
Thanks in Advance :)
This question is probably bigger than you think.
First of all, using <? echo base_url();?> instead of hard-coding your web address will not slow down your site. A function call like this has a negligible effect on the speed of loading your pages.
I think the other part of your question is regarding architecture.
When you think of speed for your website, you need to know what factors slow down the loading of your page. (Although not an exhaustive list, this will help in your case):
the number of files (images, css, javascript, etc.) that need to load for your page
the cache-ability of those files
server-side header settings (ETags and so forth)
the processing to build your php pages
the size of your page
Now, in your instance, I would recommend putting all of your "static" files in the document root under a folder (say static). Then, access them all in your "views" with the base_url() function.
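A minimal sketch of what that looks like in a view (folder and file names here are hypothetical; base_url() requires CodeIgniter's URL helper to be loaded):

<!-- application/views/partials/head.php -->
<?php // assumes $this->load->helper('url'); has been called ?>
<link rel="stylesheet" href="<?php echo base_url(); ?>static/css/styles.css">
<script src="<?php echo base_url(); ?>static/js/app.js"></script>
<img src="<?php echo base_url(); ?>static/img/logo.png" alt="Logo">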
This way, your page, as it's delivered to the browser, will make external calls for those static files, allowing the browser to cache all of them (assuming the headers are set up correctly). If you put them into views, then they're actually added to the page being requested, so the next page requested has to download those files all over again along with that second page. Make sense?
To help with the "number of files", you can always concatenate and minify any css/javascript that you have. So instead of the browser downloading and caching 8 js files, you can serve it 1 js file with all of your code.

How do I create a changing image for my website?

Not long ago I came across this website: http://www.danasoft.com/
This website provides dynamically updating signatures, which are pretty cool in my opinion.
There is just one thing that I don't get and would really like to know how to do.
Here's a direct link to an image on the website: http://www.danasoft.com/vipersig.jpg. Try refreshing. Notice it changes? How do I achieve that? How do I have a direct link to a file like www.mypage.com/thing.jpeg serve different images each time?
Basically, the URL is not actually retrieving the file directly each time; rather, the server is intercepting that URL and serving a (possibly random) image from a larger set of images. Depending on whether the server is running Apache, IIS, etc., the implementation will vary. This could probably also be achieved with the MVC routing engine by defining a custom route handler for URLs ending in '.jpg', but I'm not actually sure.
EDIT:
See this discussion for more detail on the MVC implementation.
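
On Apache with PHP, for instance, the effect can be sketched like this (file and folder names are hypothetical):

# .htaccess: quietly map the static-looking URL to a script
RewriteEngine On
RewriteRule ^thing\.jpg$ /random-image.php [L]

<?php
// random-image.php: stream a random JPEG from a pool of images
$images = glob(__DIR__ . '/sig-images/*.jpg');
if (!$images) {
    http_response_code(404);
    exit;
}
$file = $images[array_rand($images)];
header('Content-Type: image/jpeg');
header('Cache-Control: no-store'); // discourage caching so each refresh can differ
readfile($file);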

In a sitemap, is it advisable to include links to every page on the site, or only ones that need it?

I'm in the process of creating a sitemap for my website. I'm doing this because I have a large number of pages that users can normally only reach via a search form.
I've created an automated method for pulling the links out of the database and compiling them into a sitemap. However, for all the pages that are regularly accessible, and do not live in the database, I would have to manually go through and add these to the sitemap.
It strikes me that the regular pages are those that get found anyway by ordinary crawlers, so it seems like a hassle manually adding in those pages, and then making sure the sitemap keeps up to date on any changes to them.
Is it bad to just leave those out, if they're already being indexed, and have my sitemap contain only my dynamic pages?
Google will crawl any URLs (as allowed by robots.txt) it discovers, even if they are not in the sitemap. So long as your static pages are all reachable from the other pages in your sitemap, it is fine to exclude them. However, there are other features of sitemap XML that may incentivize you to include static URLs in your sitemap (such as modification dates and priorities).
If you're willing to write a script to automatically generate a sitemap for database entries, then take it one step further and make your script also generate entries for static pages. This could be as simple as searching through the webroot and looking for *.html files. Or if you are using a framework, iterate over your framework's static routes.
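A rough sketch of that idea in PHP (the webroot path, domain, and the function that returns your database URLs are all hypothetical):

<?php
// build-sitemap.php: combine dynamic (database) URLs with static *.html pages
$urls = get_database_urls(); // hypothetical: your existing dynamic-URL query
foreach (glob('/var/www/webroot/*.html') as $file) {
    $urls[] = 'https://example.com/' . basename($file);
}
$xml  = '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
$xml .= '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
foreach ($urls as $url) {
    $xml .= '  <url><loc>' . htmlspecialchars($url) . "</loc></url>\n";
}
$xml .= '</urlset>';
file_put_contents('/var/www/webroot/sitemap.xml', $xml);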
Yes, I think it is bad to leave them out. It would also be advisable to look for a way for your search pages to be found by a crawler without a sitemap. For example, you could add some kind of advanced search page where a user can select the search term in a form; crawlers can also fill in those forms.