What's the best way to cache binary data? - caching

I pre-generate 20+ million gzipped html pages, store them on disk, and serve them with a web server. Now I need this data to be accessible by multiple web servers. Rsync-ing the files takes too long. NFS seems like it may take too long.
I considered using a key/value store like Redis, but Redis only stores strings as values, and I suspect it will choke on gzipped files.
My current thinking is to use a simple MySQL/Postgres table with a string key and a binary value. Before I implement this solution, I wanted to see if anyone else had experience in this area and could offer advice.

I've heard good things about Redis, that's one.
I've also heard extremely positive things about memcached. It is suitable for binary data as well.
Take Facebook for example: These guys use memcached, also for the images!
As you know, images are in binary.
So, get memcached, get a machine to utilize it, a binding for PHP or whatever you use for your sites, and off you go! Good luck!
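To make the memcached suggestion concrete, here's a minimal sketch (in Python, assuming the pymemcache client and a memcached instance on localhost:11211; any memcached binding works the same way) of storing a pre-gzipped page as raw bytes:

    import gzip
    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    def cache_page(path, html):
        # memcached values are just bytes, so gzipped content is fine
        client.set(path, gzip.compress(html.encode("utf-8")), expire=0)

    def fetch_page(path):
        # returns the gzipped blob, ready to serve with Content-Encoding: gzip,
        # or None on a cache miss
        return client.get(path)

    cache_page("/products/42", "<html>...</html>")
    print(len(fetch_page("/products/42") or b""))

One thing to keep in mind: memcached's default item size limit is 1MB, which is plenty for gzipped HTML pages but rules it out for large binaries.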

First off, why cache the gzips? Network latency and transmission time are orders of magnitude higher than the CPU time spent compressing the file, so doing it on the fly may be the simplest solution.
However, if you definitely have a need then I'm not sure a central database is going to be any quicker than a file share (of course you should be measuring, not guessing, these things!). A simple approach could be to host the original files on an NFS share and let each web server gzip and cache them locally on demand, as in the sketch below. memcached (as Poni suggests) is also a good alternative, but adds a layer of complexity.
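As a rough illustration of that approach (the directory paths are assumptions), each web server keeps a local on-disk cache of gzipped pages and only hits the NFS share on a miss:

    import gzip
    import os

    ORIGINALS_DIR = "/mnt/nfs/pages"   # shared, uncompressed HTML (hypothetical mount)
    CACHE_DIR = "/var/cache/gzpages"   # local per-server cache

    def get_gzipped(page_name):
        cached = os.path.join(CACHE_DIR, page_name + ".gz")
        if os.path.exists(cached):
            with open(cached, "rb") as f:
                return f.read()
        # miss: read the original over NFS, compress, and cache locally
        with open(os.path.join(ORIGINALS_DIR, page_name), "rb") as f:
            blob = gzip.compress(f.read())
        os.makedirs(os.path.dirname(cached), exist_ok=True)
        with open(cached, "wb") as f:
            f.write(blob)
        return blob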

Related

Working around a very heavy encryption algorithm?

I'm in the process of building an API in NodeJS. Our main API is built in Java, in which all the ids are encrypted (one example being AA35794C728A440F).
The Node API needs to use the same encryption algorithm for compatibility.
During testing of the API, I was surprised to find that it was only able to handle somewhere in the region of 25 to 40 (depending on the AWS EC2 instance I tested with) requests per second, and that the CPU was maxing out.
Digging into it, I found the issue was with the algorithm being used, specifically that it was performing 1000 md5 operations per key per encrypt/decrypt.
Removing the encryption gave me a massive increase in throughput, up to 1200 requests per second.
I'm stuck with the algorithm - it won't be possible to change without impacting many consumers of the API, so I need to find a way to work around it.
I was wondering what the most efficient way to handle this would be, keeping in mind that I need to be able to 'encrypt' or 'decrypt'?
My question isn't so much how to make the algorithm more efficient, given that I would like to avoid the 1000 md5 ops per id, but rather, what an efficient way of bypassing the actual encryption itself would be.
I was thinking of storing all the keys (up to maybe 2 or 3 million) in a map or a tree and then doing a lookup, however that would mean lugging around 30-50MB of ids in the repository, plus consuming a lot of memory.
Lacking any code, it sounds like a key derivation is being done on each invocation.
Key derivations are designed to be slow. Provide more information on what you are trying to accomplish and some code.
50MB of memory for a cache doesn't sound like that much to me... you could also use memcached (possibly AWS ElastiCache) to do the actual caching - this way it can be easily shared across multiple servers.
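To illustrate the caching idea (shown in Python for brevity; the same pattern applies in Node, and slow_decrypt below is only a stand-in for the real 1000-round algorithm):

    import hashlib
    from functools import lru_cache

    def slow_decrypt(encrypted_id):
        # stand-in for the expensive existing algorithm: 1000 chained md5 rounds
        digest = encrypted_id.encode()
        for _ in range(1000):
            digest = hashlib.md5(digest).digest()
        return digest.hex()

    @lru_cache(maxsize=1_000_000)
    def decrypt_cached(encrypted_id):
        # pays the md5 cost once per id; repeat requests are a dict lookup.
        # Swap the decorator for memcached/ElastiCache to share across servers.
        return slow_decrypt(encrypted_id)

This way the 30-50MB of ids never has to live in the repository; the cache fills itself with only the ids that are actually requested.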

Distributed cache with huge objects ~1-2GB

I have a need to stream a huge dataset, around 1-2GB, but only on demand as users explore the data. For example, if they don't explore parts of the data, I don't want to send those parts out.
So now, I have a solution that effectively returns JSON only for things they need. The need for a cache arises because these 1-2GB objects are actually constructed in memory from a file or files on disk, so the latency is ~30 seconds if you have to read the file to return this data.
How do I manage such a cache? Basically I think the solution is something like ZooKeeper, where I store the name of the physical machine which holds the cache and then forward my REST request to that.
Would you guys also consider this to be the right model? I wonder what kind of checks I will have to do so that if the node that has the cache goes down, I can still fulfil the request without an error, just with higher latency.
Has anybody developed such a system? All the solutions out there are for seemingly small rows or objects.
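A minimal sketch of the routing model described in the question (node names and the rebuild step are hypothetical; in practice the registry would live in ZooKeeper or similar rather than a local dict):

    import random

    NODES = ["cache-1:8080", "cache-2:8080", "cache-3:8080"]
    registry = {}   # dataset key -> node currently holding the warm cache

    def is_alive(node):
        return True   # placeholder health check (ping, ZooKeeper ephemeral node, ...)

    def rebuild_cache(node, dataset_key):
        pass          # placeholder: node loads the 1-2GB object from its files (~30s)

    def route(dataset_key):
        node = registry.get(dataset_key)
        if node is None or not is_alive(node):
            # cold path: pick a healthy node, remember it, accept the higher latency
            node = random.choice([n for n in NODES if is_alive(n)])
            registry[dataset_key] = node
            rebuild_cache(node, dataset_key)
        return node   # forward the REST request here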
https://github.com/golang/groupcache is used for bigger things, but although it's used by http://dl.google.com, I'm not sure how it would do with multi-gigabyte objects.
On the other hand, HTTP can do partial transfers and will be very efficient at that. Take a look at Varnish, or Nginx.
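For a sense of what partial transfers give you: a client only has to send a Range header and the cache (Varnish, Nginx, or the origin) returns just that slice of the multi-gigabyte object. A quick illustration (the URL is hypothetical):

    import urllib.request

    req = urllib.request.Request(
        "http://cache.example.com/dataset.bin",   # hypothetical object URL
        headers={"Range": "bytes=0-1048575"},     # first 1 MiB only
    )
    with urllib.request.urlopen(req) as resp:
        chunk = resp.read()
        print(resp.status, len(chunk))            # expect 206 Partial Content, 1048576 bytes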

Serving millions of routes with good performance

I'm doing some research for a new project, for which the constraints and specifications have yet to be set. One thing that is wanted is a large number of paths, directly under the root domain. This could ramp up to millions of paths. The paths don't have a common structure or unique parts, so I have to look for exact matches.
Now I know it's more efficient to break up those paths, which would also help in the path lookup. However I'm researching the possibility here, so bear with me.
I'm evaluating methods to accomplish this, while maintaining excellent performance. I thought of the following methods:
Storing the paths in an SQL database and doing a lookup on every request. This seems like the worst option and will definitely not be used.
Storing the paths in a key-value store like Redis. This would be a lot better, and perform quite well I think (have to benchmark it though).
Doing string/regex matching - like many frameworks do out of the box - for this amount of possible matches is nuts and thus not really an option. But I could see how doing some sort of algorithm where you compare letter-by-letter, in combination with some smart optimizations, could work.
But maybe there are tools/methods I don't know about that are far more suited for this type of problem. I could use any tips on how to accomplish this though.
Oh and in case anyone is wondering, no this isn't homework.
UPDATE
I've tested the Redis approach. Based on two sets of keywords, I got 150 million paths. I've added each of them using the SET command, with the value being a serialized string of IDs I can use to identify the actual keywords in the request (SET 'keyword1-keyword2' '<serialized_string>').
A quick test in a local VM with a data set of one million records returned promising results: benchmarking 1000 requests took 2ms on average. And this was on my laptop, which runs tons of other stuff.
Next I did a complete test on a VPS with 4 cores and 8GB of RAM, with the complete set of 150 million records. This yielded a database of 3.1GB in file size, and around 9GB in memory. Since the database could not be loaded into memory entirely, Redis started swapping, which caused terrible results: around 100ms on average.
Obviously this will not work or scale nicely. Either each web server needs an enormous amount of RAM for this, or we'll have to use a dedicated Redis-routing server. I've read an article from the engineers at Instagram, who came up with a trick to decrease the database size dramatically, but I haven't tried this yet. Either way, this does not seem like the right way to do this. Back to the drawing board.
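For reference, a rough version of the load-and-benchmark test described above, assuming redis-py and a local Redis instance (key/value layout as in the update):

    import time
    import redis

    r = redis.Redis()

    # bulk-load one million paths, pipelined in batches
    pipe = r.pipeline(transaction=False)
    for i in range(1_000_000):
        pipe.set(f"keyword{i}-keyword{i + 1}", f"{i},{i + 1}")   # serialized id pair
        if i % 10_000 == 0:
            pipe.execute()
    pipe.execute()

    # benchmark 1000 exact-match lookups
    start = time.perf_counter()
    for i in range(0, 1_000_000, 1000):
        r.get(f"keyword{i}-keyword{i + 1}")
    elapsed = time.perf_counter() - start
    print(f"{elapsed / 1000 * 1000:.3f} ms per GET")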
Storing the paths in an SQL database and doing a lookup on every request. This seems like the worst option and will definitely not be used.
You're probably underestimating what a database can do. Can I invite you to reconsider your position there?
For Postgres (or MySQL w/ InnoDB), a million entries is a notch above tiny. Store the whole path in a field, add an index on it, vacuum, analyze. Don't do nutty joins until you've identified the ID of your key object, and you'll be fine in terms of lookup speeds. Say a few ms when running your query from psql.
Your real issue will be the bottleneck related to disk IO if you get material amounts of traffic. The operating motto here is: the less, the better. Besides the basics such as installing APC on your php server, using Passenger if you're using Ruby, etc:
Make sure the server has plenty of RAM to fit that index.
Cache a reference to the object related to each path in memcached.
If you can categorize all routes in a dozen or so regex, they might help by allowing the use of smaller, more targeted indexes that are easier to keep in memory. If not, just stick to storing the (possibly trailing-slashed) entire path and move on.
Worry about misses. If you've got a non-canonical URL that redirects to a canonical one, store the redirect in memcached without any expiration date and be done with it.
Did I mention lots of RAM and memcached?
Oh, and don't overrate that ORM you're using, either. Chances are it's taking more time to build your query than your data store is taking to parse, retrieve and return the results.
RAM... Memcached...
To be very honest, Redis isn't so different from a SQL + memcached option, except when it comes to memory management (as you found out), sharding, replication, and syntax. And familiarity, of course.
Your key decision point (besides excluding iterating over more than a few regexes) ought to be how your data is structured. If it's highly structured with critical needs for atomicity, SQL + memcached ought to be your preferred option. If you've got custom fields all over and obese EAV tables, then playing with Redis or CouchDB or another NoSQL store ought to be on your radar.
In either case, it'll help to have lots of RAM to keep those indexes in memory, and a memcached cluster in front of the whole thing will never hurt if you need to scale.
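A sketch of the SQL + memcached approach described above, with one indexed column holding the full path and memcached absorbing repeat lookups, including misses (connection details and the table name are assumptions, shown with psycopg2 and pymemcache):

    import psycopg2
    from pymemcache.client.base import Client

    mc = Client(("localhost", 11211))
    db = psycopg2.connect("dbname=routes")

    # one-time setup (the PRIMARY KEY gives you the b-tree index used below):
    #   CREATE TABLE routes (path text PRIMARY KEY, object_id bigint);

    def lookup(path):
        hit = mc.get(path)
        if hit is not None:
            return None if hit == b"MISS" else int(hit)
        with db.cursor() as cur:
            cur.execute("SELECT object_id FROM routes WHERE path = %s", (path,))
            row = cur.fetchone()
        # cache misses too, so repeated bad URLs never touch the database
        mc.set(path, b"MISS" if row is None else str(row[0]), expire=0)
        return row[0] if row else None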
Redis is your best bet I think. SQL would be slow and regular expressions from my experience are always painfully slow in queries.
I would do the following steps to test Redis:
Fire up a Redis instance either with a local VM or in the cloud on something like EC2.
Download a dictionary or two and pump this data into Redis, for example something from here: http://wordlist.sourceforge.net/. Make sure you normalize the data: for example, always lowercase the strings and remove whitespace at the start/end of the string, etc.
I would ignore the hash. I don't see why you need to hash the URL. It would be impossible to read later if you wanted to debug things and it doesn't seem to "buy" you anything. I went to http://www.sha1-online.com/, entered ryan, and got ea3cd978650417470535f3a4725b6b5042a6ab59 as the hash. The original text would be much smaller to put in RAM, which will help Redis. Obviously for longer paths the hash would be better, but your examples were very small. =)
Write a tool to read from Redis and see how well it performs.
Profit!
Keep in mind that Redis needs to keep the entire data set in RAM, so plan accordingly.
I would suggest using some kind of key-value store (i.e. a hashing store), possibly along with hashing the key so it is shorter (something like SHA-1 would be OK IMHO).
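If the raw paths turn out to be long, hashing them down to a fixed 20-byte key keeps the per-key memory constant, at the cost of the debuggability mentioned in the previous answer. A minimal sketch:

    import hashlib

    def redis_key(path):
        # normalize, then hash to a fixed-length binary key
        return hashlib.sha1(path.strip().lower().encode("utf-8")).digest()

    print(redis_key("/keyword1-keyword2").hex())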

Storing images in NoSQL stores

Our application will be serving a large number of small, thumbnail-size images (about 6-12KB in size) through HTTP. I've been asked to investigate whether using a NoSQL data store is a viable solution for data storage. Ideally, we would like our data store to be fault-tolerant and distributed.
Is it a good idea to store blobs in NoSQL stores, and which one is good for it? Also, is NoSQL a good solution for our problem, or would we be better served storing the images in the file system and serving them directly from the web server (as an aside, CDN is currently not an option for us)?
Whether to store images in a DB or the filesystem is sometimes one of those "holy war" type of debates; each side feels their way of doing things is the one right way. In general:
To store in the DB:
Easier to manage back-up/replicate everything at once in one place.
Helps with your data consistency and integrity. You can set the BLOB field to disallow NULLs, but you're not going to be able to prevent an external file from being deleted. (Though this isn't applicable to NoSQL since there aren't the traditional constraints).
To store on the filesystem:
A filesystem is designed to serve files. Let it do its job.
The DB is often your bottleneck in an application. The more load you can take off it, the better.
Easier to serve on a CDN (which you mentioned isn't applicable in your situation).
I tend to come down on the side of the filesystem because it scales much better. But depending on the size of your project, either choice will likely work fine. With NoSQL, the differences are even less apparent.
Mongo DB should work well for you. I haven't used it for blobs yet, but here is a nice FLOSS Weekly podcast interview with Michael Dirolf from the Mongo DB team where he addresses this use case.
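For what it's worth, a minimal sketch of the blob use case with MongoDB's GridFS (assuming pymongo and a local mongod); for 6-12KB thumbnails you could also just embed the bytes in a regular document, since they're far below the 16MB document limit:

    import gridfs
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["thumbnails"]
    fs = gridfs.GridFS(db)

    def store_thumb(name, data):
        # data is raw bytes; GridFS chunks and replicates it like any other collection
        return fs.put(data, filename=name)

    def load_thumb(name):
        return fs.get_last_version(filename=name).read()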
I was looking for a similar solution for a personal project and came across Riak, which, to me, seems like an amazing solution to this problem. Basically, it distributes a specified number of copies of each file to the servers in the network. It is designed such that a server coming or going is no big deal. All the copies on a server that leaves are distributed amongst the others.
With the right configuration, Riak can deal with an entire datacenter crashing.
Oh, and it has commercial support available.
Well CDN would be the obvious choice. Since that's out, I'd say your best bet for fault tolerance and load balancing would be your own private data center (whatever that means to you) behind 2 or more load balancers like an F5. This will be your easiest management system and you can get as much fault tolerance as your hardware budget allows. You won't need any new software expertise, just XCOPY.
For true fault tolerance you're going to need geographic dispersion or you're subject to anyone with a backhoe.
(Gravatars?)
If you are in a Python environment, consider the y_serial module: http://yserial.sourceforge.net/
In under 10 minutes, you will be able to store and access your images (in fact, any arbitrary Python object including webpages) -- in compressed form; NoSQL.
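If pulling in y_serial isn't an option, the same idea (compressed blobs keyed by name in a single SQLite file) takes only a few lines of standard-library Python; this is an illustration of the approach, not y_serial's API:

    import sqlite3
    import zlib

    db = sqlite3.connect("images.db")
    db.execute("CREATE TABLE IF NOT EXISTS blobs (key TEXT PRIMARY KEY, data BLOB)")

    def put(key, raw):
        db.execute("INSERT OR REPLACE INTO blobs VALUES (?, ?)", (key, zlib.compress(raw)))
        db.commit()

    def get(key):
        row = db.execute("SELECT data FROM blobs WHERE key = ?", (key,)).fetchone()
        return zlib.decompress(row[0]) if row else None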

Filesystem seek performance with lots of tiny files

I'm looking to build a server with lots of tiny files delivered by an XML API. It won't be doing a whole lot of iterating over directories or blocks of sequential files--we're talking lots and lots of seeks for discontinuous data.
Will seek time on BSD UFS degrade over time for requests for individual files? I understand that the filesystem's inode limit is based on the size of the partition/slice, but the hard drive has to step through the inode table for every file request before it can discover the location of the data. What filesystem yields the best performance for seek time?
The alternative is to set up 2-4GB "blob" files and have a separate system within the software for locating a file inside them. The software's "inode table" could be optimized for delivery based on the currently logged-in user, etc. These "inode tables" would likely be cached in RAM and would only relate to the users currently logged in, so that there are fewer wasted resources.
How do these two solutions rate from a scalability and maintenance standpoint? What sort of performance gains, if any, could I expect by using the second solution?
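A rough sketch of the second option, to make the idea concrete: pack the tiny files into one big blob and keep a small in-memory "inode table" of (offset, length) per name (class and file names are hypothetical):

    import os

    class BlobStore:
        def __init__(self, path):
            self.path = path
            self.index = {}              # name -> (offset, length), the software "inode table"
            open(path, "ab").close()     # make sure the blob file exists

        def add(self, name, data):
            with open(self.path, "ab") as f:
                f.seek(0, os.SEEK_END)
                offset = f.tell()
                f.write(data)
            self.index[name] = (offset, len(data))

        def read(self, name):
            offset, length = self.index[name]   # one dict lookup instead of a directory walk
            with open(self.path, "rb") as f:
                f.seek(offset)
                return f.read(length)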
The most obvious and time-proven mitigation technique is to use a good hierarchical design for directories (and pathname search strategies), and have more directories with fewer files in each.
For recent FreeBSD versions with dirhash and softupdates I have seen no problems with a few tens of thousands of files per directory. You probably don't want to go north of 500,000 files or so. E.g. deleting a directory with 2,500,000 files took me three days.
I'm not sure I understand your question correctly, but if you want to seek over lots of files, why not use a partitioned MySQL table laid out on a RAID0 or VFS filesystem?
Edit: as far as I know, lots of files in one folder will degrade any FS's speed, as it has to maintain bigger lists of files, permissions and names; a database is designed to keep lists of data in memory and seek through it in a very optimized way.
More details of your situation would be helpful. Do the files already exist, or would they be created by your application? If you need a way to store arbitrary data without the structure of a relational database, have you looked at object databases?
Another option, if your objects should or can be accessed via HTTP, is to use a Varnish cache in front of a small web server. Initially objects would be stored on disk, but Varnish would store and serve objects from memory after the first access to a given object.
