I compressed my images from 80-300 kB down to 4-12 kB with https://tinypng.com/ and replaced them in the Joomla (3.6.5) media library. They keep the new, smaller size inside the library, but when downloaded from the front-end they are back to their original size.
The tool https://developers.google.com/speed/pagespeed/insights still gives the same message that my images are too big, and it lists the newly compressed images with their old sizes.
What did I do wrong?
If Google's PageSpeed tool didn't show the same old images, this might have been a local caching problem. However, since it reports the bigger file sizes, caching can be eliminated as a possible cause.
Therefore: your server still has the old (uncompressed) images. Something went wrong when you uploaded / replaced them.
A bit of background to find the root cause: unlike some other CMSes, Joomla does not touch images when serving a page. They are stored on your web server exactly as uploaded and are served unchanged by Apache (or whatever other web server you might be running). You can use an FTP client to check the file sizes directly.
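If you can't get at the server with FTP, a throwaway PHP script run on the server shows the sizes just as well. This is only a sketch; the path is an example and should point at your actual Joomla images folder:

```php
<?php
// List file sizes in the Joomla images folder (path is an example; adjust to your install).
foreach (glob('/var/www/html/images/*.{jpg,jpeg,png,gif}', GLOB_BRACE) as $file) {
    printf("%s: %.1f kB\n", basename($file), filesize($file) / 1024);
}
```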
I have read many questions/comments about saving images in the database versus the file system on the server side; however, I'm still confused. For now I allow users to upload images (limited to 10 MB), and I save each image in a server folder and serve it via an Apache context-path configuration pointed at that location. However, due to the number of images and the high load, we want to provide load balancing and failover. So I have two options:
Add code to replicate each uploaded image to all servers, or use rsync to do that (a one-line sketch follows below).
Use CouchDB or MongoDB and save the image as an attachment of a document, so I get replication functionality out of the box.
Can anyone show me the pros/cons of these approaches? Can CouchDB/MongoDB match the read performance of the file system?
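For option 1, the rsync route would be roughly this one-liner (hostnames and paths are made up):

```
# push new uploads to the second web server, e.g. from a cron job
rsync -az /var/www/uploads/ web2:/var/www/uploads/
```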
You can also store files in a distributed file system. The benefit over a database-backed image server is that you do not have to alter the application. Obviously, storing all the data the same way, including images, may be a benefit for you, but changing the architecture of an already-working system may also be problematic.
For example, GlusterFS may be installed on top of a "normal" file system to give you distributed features while minimizing changes to the system itself. Via its plugins (translators), it is supposed to support all the features you would expect from a cloud system: replication, load balancing, striping of files across nodes, and failover.
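For a rough idea of the moving parts, a minimal sketch (volume, brick, and host names are made up, and exact commands vary by GlusterFS version):

```
# create a two-way replicated volume across two web servers
gluster volume create images replica 2 web1:/bricks/images web2:/bricks/images
gluster volume start images

# mount it where the application already expects its image directory
mount -t glusterfs web1:/images /var/www/images
```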
Can CouchDB/MongoDB match the read performance of the file system?
No, there will be a lag between file system timings and database timings; that is an unfortunate reality.
I don't know your current setup, load, or performance figures, so I can't really advise on what to do; however, Apache isn't really a good image server anyway.
Your best bet might be to look into a CDN cache for your images.
I might be asking a dumb question, but I am trying to understand the security precautions I need to take to let users upload images (PNG, JPG, GIF) to S3 and serve them, via an absolute URL hosted on EC2, in a Facebook canvas app. I have the bucket name and file name stored in RDS and plan on showing the images via their absolute addresses in the canvas.
I realize that for picture uploads, at minimum, there needs to be a check for:
1) file type (jpg, png, gif),
2) file size (< 5 MB),
3) mime type?
My question is: since the files are stored on S3 and only the file name and bucket name physically reside on the server, are additional security precautions necessary? I read elsewhere that I should run each upload through GD or ImageMagick, resize it, etc., and I am concerned that might be overkill and tax server resources. I realize that file upload security is very, very difficult, and any help would be greatly appreciated.
Thank you in advance.
I would advise not running uploads through GD or ImageMagick as you describe, because of the system load and the unnecessary extra processing. S3 is really fast, especially from another box on Amazon's network.
I ended up resizing graphics because we didn't need them that large, and we didn't want to pay for storing large images, but that's not a security issue, just a $$/payment issue.
The security isn't hard. Your EC2 instance, as referenced by your Facebook app, doesn't need anything beyond read access. Your server-side code handles the writes. When you upload the image using the S3 object (if this is in PHP), you simply set the file's permissions (ACL) as part of the upload.
I have a bunch of Facebook apps that post images to S3 with no problem. It's a great architecture and works well.
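To make that concrete, here's a rough sketch of the upload path using the AWS SDK for PHP; the bucket name, paths, and limits are placeholders, and the validation is the minimal set from the question:

```php
<?php
// Sketch only: validate an upload, then push it to S3 with a public-read ACL.
// Assumes the AWS SDK for PHP is installed via Composer; names below are examples.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$allowed = ['image/jpeg', 'image/png', 'image/gif'];
$tmp     = $_FILES['photo']['tmp_name'];

// 1) size check (< 5 MB)
if ($_FILES['photo']['size'] > 5 * 1024 * 1024) {
    exit('File too large');
}

// 2) MIME check based on the file contents, not the client-supplied type
$mime = (new finfo(FILEINFO_MIME_TYPE))->file($tmp);
if (!in_array($mime, $allowed, true)) {
    exit('Not an allowed image type');
}

// 3) sanity check that it really parses as an image
if (getimagesize($tmp) === false) {
    exit('Corrupt or non-image file');
}

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
$s3->putObject([
    'Bucket'     => 'my-app-images',           // example bucket name
    'Key'        => 'uploads/' . uniqid() . '.jpg',
    'SourceFile' => $tmp,
    'ACL'        => 'public-read',             // world-readable; writes stay server-side
]);
```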
I have a website centered around an online chat application where each user can have up to several hundred contacts. Each contact has their own profile image, and I want each contact's profile image to load next to their name. However, having the user download 100+ images every time they load the site seems intensive (studies have shown that as much as 40% of users don't utilize their cache). Each image is around 60x60 pixels.
When I search on Google or sign in to Facebook, dozens of images are served nearly instantaneously. Beyond just having fast servers and a good connection, what are the optimal methods for delivering so many images to the user?
Possible approaches I have come up with are:
Storing each user's profile image in a database, building one combined image (a sprite) in a PHP file, having the user download that, and then using CSS to display each profile image (a rough sketch of this follows below). However, this seems extremely intensive for the server, and referencing such a large file so many times might take a toll on the user's browser.
Using nginx rather than Apache to serve the images (nginx generally works better for serving static content such as this). However, this seems more like an optimization of a solution rather than a solution in itself.
I am also aware that data can be delivered across persistent HTTP connections, so multiple requests do not have to open separate connections to the server for multiple files. However, exactly how many files can be delivered across one persistent connection? Would this persistent model mean that just having the images load as separate files is not necessarily a bad idea?
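For reference, here's the kind of thing I mean by the first approach: a rough sketch using GD, with made-up paths:

```php
<?php
// Sketch: stitch 60x60 avatars into one horizontal sprite (GD extension).
$avatars = glob('/var/www/avatars/*.png');   // example path
$sprite  = imagecreatetruecolor(60 * count($avatars), 60);

foreach ($avatars as $i => $path) {
    $src = imagecreatefrompng($path);
    imagecopyresampled($sprite, $src, $i * 60, 0, 0, 0, 60, 60, imagesx($src), imagesy($src));
    imagedestroy($src);
}

imagepng($sprite, '/var/www/sprites/contacts.png');
// Each contact then gets CSS like: background: url(contacts.png) -<i*60>px 0;
```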
Any suggestions, solutions, and/or notes on personal experience with relevant matters would be greatly appreciated. Scalability is extremely important here, as is cross-browser support (IE7+, Opera, Firefox, Chrome, Safari).
EDIT: I AM NOT USING JQUERY.
Here's a jQuery plugin that delays loading images until they're actually needed (i.e., it initially loads only the images "above the fold"):
http://www.appelsiini.net/2007/9/lazy-load-images-jquery-plugin
An alternative may be to use Flash to display just the images. The advantage is that Flash gives you a much stronger local cache that you have programmatic control over.
What techniques do people commonly use for uploading, storing and presenting images with a CMS?
Do you store them in the database or on the file system?
Do you generate thumbnails on upload? Or on the fly, then maybe cache them for reuse? Or rely on browser scaling?
Typically, most content management systems will store the actual data of image uploads on the file system and then add a link to the file within the database. Thumbnails can be generated either on upload or on first request (fully on-the-fly generation is considered inefficient, especially given the cheap cost of storage). Browser scaling is a bad idea (images may be uploaded as multi-megabyte uncompressed files) but is done by some systems.
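As an illustration of that upload-and-link pattern (a sketch only; the table and paths are made up):

```php
<?php
// Sketch: save the uploaded file to disk and record only its path in the database.
$dir  = '/var/www/media/' . date('Y/m');              // example directory layout
$name = uniqid() . '_' . basename($_FILES['image']['name']);

if (!is_dir($dir)) {
    mkdir($dir, 0755, true);
}
move_uploaded_file($_FILES['image']['tmp_name'], "$dir/$name");

// hypothetical table: media(directory, filename)
$pdo  = new PDO('mysql:host=localhost;dbname=cms', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO media (directory, filename) VALUES (?, ?)');
$stmt->execute([$dir, $name]);
```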
I agree with Kevin. I can't think of any CMS that doesn't store to the file system. The only issue that comes up with that technique is if you are planning on clustering multiple web servers to run your CMS; if that's the case, you have to plan for it and be able to point all the web servers at the same file storage location.
The technique I've used for years is: on upload, resize the image to something practical for the web, then generate the thumbnail, write both to the file system, and record the pointer in the database.
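A sketch of that flow using the Imagick extension (sizes, paths, and filenames here are arbitrary):

```php
<?php
// Sketch: on upload, make a web-sized copy plus a thumbnail, then record the paths.
$img = new Imagick($_FILES['photo']['tmp_name']);

$img->resizeImage(1024, 0, Imagick::FILTER_LANCZOS, 1);   // web copy, width capped at 1024px
$img->writeImage('/var/www/images/full/photo123.jpg');    // example filenames

$img->thumbnailImage(150, 0);                              // thumbnail, 150px wide
$img->writeImage('/var/www/images/thumbs/photo123.jpg');

// ...then INSERT the two paths into your images table as described above.
```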
If the site is huge, you need to serve the images from cache servers, because file systems are very slow compared to serving straight from RAM over the network. Take Facebook, for example: they have billions of images on their site, and last I heard around 80% were held in RAM on cache servers around the world. The file storage array they have is more or less a backup to the cache servers.
My question is about displaying thumbnails and storage.
Let's say I have a website where users can upload photos and view them in albums.
How are the photos usually stored in this scenario? Are the images themselves stored in the database, or just the file paths?
If the photos are large and you want to display thumbnails, is it better to:
save a copy of the image and a reduced-size version, only displaying the larger one if requested?
use HTML to reduce the size?
It's almost always a bad idea to store images in a database. BLOBs can really slow down a database something fierce. It also limits your ability to spread storage around different drives. When the files are separate, you can even have one or more separate image servers to reduce the load on the main dynamic server. My recommendations are:
In your database table, have columns for both the directory the image resides in and the image name. That way you are free to change where images are stored, round-robin drives, add more storage later and put new images in the new storage, or whatever you want. Storing the path and the filename in separate fields makes it trivial to move images from one directory to another.
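Something along these lines (an illustrative MySQL sketch; all names are made up):

```sql
CREATE TABLE images (
    id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    directory VARCHAR(255) NOT NULL,   -- e.g. '/mnt/store02/2013/05'
    filename  VARCHAR(255) NOT NULL
);
```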
You definitely want to generate thumbnail images to reduce your network bandwidth and make your application run faster. However, you can generate the thumbnails on demand, or when the system load is low. If you're on Linux, ImageMagick is wonderful at automated batch resizing of images. It can even resize by a percentage instead of an absolute amount.
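For example (note that mogrify overwrites files in place):

```
# shrink every JPEG in the folder to 50% of its original dimensions
mogrify -resize 50% *.jpg

# or write a 150px-wide thumbnail alongside the original
convert photo.jpg -resize 150x thumb_photo.jpg
```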
Some software, such as TikiWiki, stores the photos in a database and then also caches thumbnail-sized copies in the database.
Other software stores them in a directory; this is the way Gallery2 operates. I find the directory approach more scalable. If a size other than the original is requested, the app will typically use ImageMagick to resize the photo and then store a copy of the resized version.
Another alternative is to re-upload the photo to a service like S3, and not store the photo locally at all.
This is a common question, and the basic answer is that it depends; you need to give more information. What database are you planning on using? SQL Server 2008 has some good new features for handling this scenario, such as FILESTREAM. Generally I prefer to put images in the database, but if you just stuff them in there without thinking about design and access requirements, you could see poor performance as the number of photos increases.
If you are absolutely, positively sure that your web server will always have access to the file system hosting the images, then go that route. Maybe.
However, if at any time you think you might need to, I don't know, create an image server because the hard drive on your web server is running out of space, or that you need to run multiple web servers, then save yourself the trouble and store them in a database. The hard part of storing on a file system is meeting the security requirements of accessing it across the network.
Also, bear in mind that not all database servers are created equal in this regard. SQL 2008 introduced a FILESTREAM data type which actually stores the images on the local file system while allowing all read / write access through the db server. This has the added benefit of allowing you to run virus scanners on the incoming files while in storage.
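A minimal sketch of the table definition in T-SQL, assuming a FILESTREAM filegroup has already been added to the database (all names are examples):

```sql
CREATE TABLE dbo.Images (
    Id        UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    FileName  NVARCHAR(255)    NOT NULL,
    ImageData VARBINARY(MAX)   FILESTREAM NULL   -- stored on disk, accessed through the DB
);
```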
Oracle has had some nice file storage facilities for a while now. MySQL? I don't think I'd want to try it, but you might be okay.
As to the second question: save a thumbnail along with the image. This process occurs only once per image and saves presentation bandwidth. Using HTML to size an image down does nothing for the client: the full-size file is still downloaded and then scaled in the browser.