Can I use ClickHouse as key-value storage?

Is it possible to use ClickHouse as key-value storage, where data is regularly overwritten but rarely read? What engine should I use if this is possible?

ClickHouse isn't built for that use case, and it says so explicitly on the home page of its documentation:
When NOT to use ClickHouse
Transactional workloads (OLTP)
Key-value access with high request rate
Blob or document storage
Over-normalized data
However, if the QPS is low, you can still achieve good latency for point queries. ClickHouse also provides several kinds of Dictionaries that may serve better as an external key-value store. There is also a StorageJoin engine that supports a joinGet function similar to Redis's HGET operation. After this PR you can overwrite existing keys in StorageJoin.
Update: the PR is merged. Here is an isolated example.
First, populate a StorageJoin table as follows:
CREATE TABLE my_fancy_kv_store (s String, x Array(UInt8), k UInt64)
ENGINE = Join(ANY, LEFT, s);
INSERT INTO my_fancy_kv_store VALUES ('abc', [0], 1), ('def', [1, 2], 2);
Then you can use it as a dictionary (key-value):
SELECT joinGet('my_fancy_kv_store', 'x', 'abc');
SELECT joinGet('my_fancy_kv_store', 'k', 'def');
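For completeness, a rough sketch of the overwrite behaviour driven from Python, assuming the clickhouse-driver package and the join_any_take_last_row table setting (the table name here is made up):
from clickhouse_driver import Client

client = Client(host='localhost')

# Join engine table where the latest insert for a key wins
client.execute("""
    CREATE TABLE kv_overwrite (s String, x Array(UInt8), k UInt64)
    ENGINE = Join(ANY, LEFT, s)
    SETTINGS join_any_take_last_row = 1
""")

client.execute("INSERT INTO kv_overwrite VALUES", [('abc', [0], 1)])
client.execute("INSERT INTO kv_overwrite VALUES", [('abc', [42], 7)])  # overwrites key 'abc'

print(client.execute("SELECT joinGet('kv_overwrite', 'k', 'abc')"))    # expected: [(7,)]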

The EmbeddedRocksDB table engine was recently added and can help.
More can be found here: https://kb.altinity.com/engines/altinity-kb-embeddedrocksdb-and-dictionary
In my tests, compared to MergeTree, EmbeddedRocksDB handles 10-20x higher QPS, amounting to 10-100x faster response times.
It can vary by use case, but it's good enough for me not to bother with separate Redis / RocksDB / DynamoDB installations (since having KV stores inside ClickHouse lets me join across MergeTree and EmbeddedRocksDB tables, and I don't really need to scale up to the limits of Redis / DynamoDB).
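A minimal sketch of the EmbeddedRocksDB approach from Python, assuming the clickhouse-driver package (table, column and key names are illustrative):
from clickhouse_driver import Client

client = Client(host='localhost')

client.execute("""
    CREATE TABLE IF NOT EXISTS kv_store (
        key String,
        value String
    ) ENGINE = EmbeddedRocksDB PRIMARY KEY key
""")

# Inserting an existing key overwrites its value (key-value semantics).
client.execute("INSERT INTO kv_store VALUES", [('user:1', 'alice'), ('user:2', 'bob')])

# Point query by primary key - the fast path for EmbeddedRocksDB.
rows = client.execute("SELECT value FROM kv_store WHERE key = %(k)s", {'k': 'user:1'})
print(rows)  # [('alice',)]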

Related

Delete rows vs Delete Columns performance

I'm creating the data model for a time-series application on Cassandra 2.1.3. We will be preserving X amount of data for each user of the system and I'm wondering what is the best approach to design for this requirement.
Option1:
Use a 'bucket' in the partition key, so data for X period goes into the same row. Something like this:
((id, bucket), timestamp) -> data
I can delete a single row at once at the expense of maintaining this bucket concept. It also limits the range I can query on timestamp, probably resulting in several queries.
Option2:
Store all the data in the same row; deletes are per column.
(id, timestamp) -> data
Range queries are easy again. But what about performance after many column deletes?
Given that we plan to use TTL to let the data expire, which of the two models would deliver the best performance? Is the tombstone overhead of Option1 << Option2 or will there be a tombstone per column on both models anyway?
I'm trying to avoid to bury myself in the tombstone graveyard.
I think it will all depend on how much data you plan on having for the given partition key you end up choosing, what your TTL is and what queries you are making.
I typically lean towards option #1, especially if your TTL is the same for all writes. In addition, if you are using LeveledCompactionStrategy or DateTieredCompactionStrategy, Cassandra will do a great job keeping data from the same partition in the same SSTable, which will greatly improve read performance.
If you use Option #2, data for the same partition could likely be spread across multiple levels (if using LCS) or just in general multiple SSTables, which may cause you to read from a lot of SSTables, depending on the nature of your queries. There is also the issue of hotspotting, where you could overload particular Cassandra nodes if you have a really wide partition.
The other benefit of #1 (which you allude to), is that you can easily delete the entire partition, which creates a single tombstone marker which is much cheaper. Also, if you are using the same TTL, data within that partition will expire pretty much at the same time.
I do agree that it is a bit of a pain to have to make multiple queries to read across multiple partitions as it pushes some complexity into the application end. You may also need to maintain a separate table to keep track of the buckets for the given id if they can not be determined implicitly.
As far as performance goes, do you see it as likely that you will need to read across partitions when your application makes queries? For example, if you have a query for 'the most recent 1000 records' and a partition is typically wider than that, you may only need to make 1 query for Option #1. However, if you want a query like 'give me all records', Option #2 may be better, as otherwise you'll need to make a query for each bucket.
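To make the multiple-queries point concrete, a rough sketch with the DataStax Python driver, assuming year-sized buckets (the keyspace name and bucket values are made up):
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('my_keyspace')   # keyspace name is hypothetical

select = session.prepare(
    "SELECT timestamp, data FROM option1 WHERE id = ? AND bucket = ?")

user_id = 1
rows = []
for bucket in (2014, 2015):                # one query per bucket you need to cover
    rows.extend(session.execute(select, (user_id, bucket)))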
After creating the tables you described above:
CREATE TABLE option1 (
    id bigint,
    bucket bigint,
    timestamp timestamp,
    data text,
    PRIMARY KEY ((id, bucket), timestamp)
) WITH default_time_to_live=10;

CREATE TABLE option2 (
    id bigint,
    timestamp timestamp,
    data text,
    PRIMARY KEY (id, timestamp)
) WITH default_time_to_live=10;
I inserted a test row into each table:
INSERT INTO option1 (id,bucket,timestamp,data) VALUES (1,2015,'2015-03-16 11:24:00-0500','test1');
INSERT INTO option2 (id,timestamp,data) VALUES (1,'2015-03-16 11:24:00-0500','test2');
I waited 10 seconds, queried with tracing on, and saw identical tombstone counts for each table. So either way, that shouldn't be too much of a concern for you.
The real issue is that if you think you'll ever hit the limit of 2 billion columns per partition, then Option #1 is the safe one. If you have a lot of data, Option #1 might perform better (because you eliminate the need to look at partitions that don't match your bucket), but really either one should be fine in that respect.
tl;dr:
As the issues of performance and tombstones are going to be similar no matter which option you choose, I'm thinking that Option #2 is the better one, just due to ease of querying.

Load data from Cassandra

I am using Cassandra 1.2.12. I want to load data from Cassandra using Java code, but I am forced to use a limit in the query.
I am using the DataStax API to fetch data from Cassandra.
Let's assume the keyspace is 'k' and the column family is 'c'. I read data from c on some condition, which results in 10 million records. Since I was getting a time-out exception I limited it to 10000, and I know that I can't limit like 10001 to 20000... I want to load the full 10 million records. How can I solve this problem?
What you're asking about is called pagination, and you'll have to write queries with WHERE key > [some_value] to set your starting boundary for each slice you want to return. To get the correct value to use, you'll need to look at the last row returned by the previous slice.
If you're not dealing with numbers, you can use the token() function to do a range check, for example:
SELECT * FROM c WHERE token(name) > token('bob')
token() may also be required if you're paging by your partition key, which usually disallows slicing queries. For example (adapted from the DataStax documentation):
CREATE TABLE c (
k int PRIMARY KEY,
v1 int,
v2 int
);
SELECT * FROM c WHERE token(k) > token(42);
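A rough sketch of that paging loop in client code, using the DataStax Python driver here rather than the Java one (the page size and the process() callback are placeholders):
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('k')                    # keyspace 'k' from the question

page_size = 10000
last_token = None
while True:
    if last_token is None:
        query = f"SELECT k, v1, v2, token(k) FROM c LIMIT {page_size}"
    else:
        query = (f"SELECT k, v1, v2, token(k) FROM c "
                 f"WHERE token(k) > {last_token} LIMIT {page_size}")
    rows = list(session.execute(query))
    if not rows:
        break
    for row in rows:
        process(row)              # process() stands in for your own handling
    last_token = rows[-1][-1]     # token value of the last row in this page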
Loading all of the data from Cassandra is not a good option. With Kundera (which supports the DataStax Java driver), I know you can set maxResults to Integer.MAX_VALUE, which would exclude the LIMIT keyword while retrieving the data.
As Daniel said, what you are probably looking for is "pagination". Use the token() function for this and handle the number of records per page programmatically. IMHO, high-level APIs should take care of such things, like applying token() implicitly when pagination is required.
HTH,
-Vivek

Compound rowkey in Azure Table storage

I want to move some of my Azure SQL tables to Table storage. As far as I understand, I can save everything in the same table, separating it using the PartitionKey and keeping it unique within each partition using the RowKey.
Now, I have a table with a compound key:
ParentId: (uniqueidentifier)
ReportTime: (datetime)
I also understand RowKeys have to be strings. Will I need to combine these in a single string? Or can I combine multiple keys some other way? Do I need to make a new key perhaps?
Any help is appreciated.
UPDATE
My idea is to take data from several (three for now) database tables and put it in the same storage table, separating them with the partition key.
I will query using the ParentId and a WeekNumber (another column). This table has about 1 million rows that are deleted weekly from the db. My two other tables have about 6 million and 3.5 million rows.
This question is pretty broad and there is no right answer.
The specific question - can you use compound keys with Azure Table storage? Yes, you can do that. But it involves manually serializing/deserializing your object's properties. You can achieve that by overriding TableEntity's ReadEntity and WriteEntity methods. Check this detailed blog post on how you can override these methods to use your own custom serialization/deserialization.
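If all you need is the compound key itself rather than full custom serialization, a simpler route is to flatten both values into the RowKey string. A minimal sketch, assuming the azure-data-tables Python package (the table name, GUID and payload are illustrative):
from azure.data.tables import TableServiceClient
from datetime import datetime, timezone

service = TableServiceClient.from_connection_string("<your connection string>")
table = service.create_table_if_not_exists("reports")

parent_id = "3f2504e0-4f89-11d3-9a0c-0305e82c3301"      # hypothetical ParentId GUID
report_time = datetime(2015, 3, 16, 11, 24, tzinfo=timezone.utc)

entity = {
    "PartitionKey": parent_id,
    # RowKey must be a string: encode the datetime in a fixed-width, sortable format.
    "RowKey": report_time.strftime("%Y%m%d%H%M%S"),
    "Data": "example payload",
}
table.upsert_entity(entity)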
I will further discuss my view on your broader question.
First of all, why do you want to put data from 3 (SQL) tables into one (Azure table)? Just have 3 Azure tables.
The second thought, as Fabrizio points out, is how you are going to query the records, because the Windows Azure Table service has only one index, and that is the PartitionKey + RowKey properties (columns). If you are pretty sure you will mostly query data by a known PartitionKey and RowKey, then Azure Tables suits you perfectly! However, you say that your combination for the RowKey is ParentId + WeekNumber! That means a record is uniquely identified by this combination! If that is true, then you are even more ready to go.
Next, you say you are going to delete records every week! You should know that the DELETE operation acts on a single entity. You can use Entity Group Transactions to DELETE multiple entities at once, but there are limits: (a) all entities in a batch operation must have the same PartitionKey, (b) the maximum number of entities per batch is 100, and (c) the maximum size of a batch operation is 4MB. Say you have 1M records, as you say. In order to delete them, you have to first retrieve them in groups of 100, then delete them in groups of 100. That is, in the best possible case, 10k operations for retrieval and 10k operations for deletion. Even if it only costs 0.002 USD, think about the time taken to execute 10k operations against a REST API.
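A rough sketch of that retrieve-then-delete loop, again assuming the azure-data-tables Python package (the PartitionKey value and the filter are illustrative):
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<your connection string>")
table = service.get_table_client("reports")

partition = "3f2504e0-4f89-11d3-9a0c-0305e82c3301"      # hypothetical ParentId
entities = table.query_entities(f"PartitionKey eq '{partition}'")

batch = []
for entity in entities:
    batch.append(("delete", entity))
    if len(batch) == 100:            # batch limit: 100 entities, all in the same partition
        table.submit_transaction(batch)
        batch = []
if batch:
    table.submit_transaction(batch)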
Since you have to delete entities on a regular basis, tied to a WeekNumber let's say, I suggest that you dynamically create your tables and include the week number in their names. Thus you will achieve:
Even better partitioning of information
Easier and more granular information backup / delete
Deleting millions of entities requires just one operation - delete table.
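A sketch of how the table-per-week idea could look, with a hypothetical naming helper (azure-data-tables assumed again):
from azure.data.tables import TableServiceClient
from datetime import date

service = TableServiceClient.from_connection_string("<your connection string>")

def weekly_table_name(day: date) -> str:
    year, week, _ = day.isocalendar()
    return f"reports{year}w{week:02d}"          # e.g. reports2015w12

# Write into the current week's table...
current = service.create_table_if_not_exists(weekly_table_name(date.today()))

# ...and expire an old week with a single operation.
service.delete_table(weekly_table_name(date(2015, 3, 16)))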
There is no unique solution to your problem. Yes, you can use ParentID as the PartitionKey and ReportTime as the RowKey (or invert the assignment). But there are two big questions: how do you query your data, and with what conditions? And how much data do you store - 1,000 items, 1 million items, 1,000 million items? The total storage usage is important, but it's also very important to consider the number of transactions you will generate against the storage.

Remote key-value storage allowing indexes?

In our project we already have an embedded in-memory key-value store for objects, and it is very useful because it allows us to create indexes on it and query the storage based on them. So, if we have a collection of "Student"s and a compound index on student.group and student.sex, then we can find all male students from group "ABC". The same goes for deletion and so on.
Now we have to adapt our service to work in a cloud, so there will be multiple servers processing user requests, and they share state stored in this indexed key-value storage. We tried to adopt memcached for our needs, and it's almost ideal -- it is a fast, simple and proven solution, but it doesn't have indexes, so we can't use it to search our temporary data.
Is there any other way to have a remote cache, just like memcached, but with indexes?
Thank you.
Try Hazelcast. It is an in-memory data grid that distributes data among servers. You can have indexes just like you described in your question and query on them.
Usage is very simple. Just add Hazelcast.jar and start coding. It can be used either embedded or remote.
Here is the index and query usage.
Add indexes:
IMap<Integer, Student> myDistributedMap = Hazelcast.getMap("students");
myDistributedMap.addIndex("group", false);
myDistributedMap.addIndex("sex", false);
Store in the IMDG:
myDistributedMap.put(student.id, student);
Query:
Collection<Student> result = myDistributedMap.values(new SqlPredicate("sex = 'male' AND group = 'ABC'"));
Finally, it works fine in the cloud, e.g. on EC2.

Is there anything like memcached, but for sorted lists?

I have a situation where I could really benefit from having a system like memcached, but with the ability to store (per key) a sorted list of elements, and to modify the list by adding values.
For example:
something.add_to_sorted_list( 'topics_list_sorted_by_title', 1234, 'some_title')
something.add_to_sorted_list( 'topics_list_sorted_by_title', 5436, 'zzz')
something.add_to_sorted_list( 'topics_list_sorted_by_title', 5623, 'aaa')
Which I then could use like this:
something.get_list_size( 'topics_list_sorted_by_title' )
// returns 3
something.get_list_elements( 'topics_list_sorted_by_title', 1, 10 )
// returns: 5623, 1234, 5436
The required system would allow me to easily get the item count of every list, and to fetch any number of values from the list, with the assumption that the values are sorted by the attached value.
I hope that the description is clear. And the question is relatively simple: is there any such system?
Take a look at MongoDB. It uses memory-mapped files, so it is incredibly fast and should perform at a level comparable to memcached.
MongoDB is a schema-less database that should support what you're looking for (indexing/sorting)
Redis supports both lists and sorted sets. You can disable disk persistence and use it like memcached, instead of going for MongoDB, which would save data to disk.
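For the sorted-list use case specifically, here is a minimal sketch with redis-py. Members of a sorted set that share the same score come back in lexicographic order, so encoding 'title:id' as the member gives a list sorted by title (the key name and encoding are just one possible convention):
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
key = 'topics_list_sorted_by_title'

# All scores are 0, so Redis orders members lexicographically by 'title:id'.
r.zadd(key, {'some_title:1234': 0, 'zzz:5436': 0, 'aaa:5623': 0})

count = r.zcard(key)              # 3 - item count
members = r.zrange(key, 0, 9)     # first 10 members, sorted by title
ids = [m.decode().rsplit(':', 1)[1] for m in members]   # ['5623', '1234', '5436']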
MongoDB will fit. What's important is that it has indexes, so you can add an index on title for the topics collection and then retrieve items sorted by the index:
db.topics.ensureIndex({"title": 1})
db.topics.find().sort({"title": 1})
Why not just store an array in memcached? At least in Python and PHP the memcached APIs support this (I think Python uses pickle, but I don't recall for sure).
If you need permanent data storage or backup, MemcacheDB uses the same API.
Basic Python example (using the python-memcached client, which pickles Python objects transparently):
import memcache

cache = memcache.Client(['127.0.0.1:11211'])
storedDataName = 'myStoredData'   # cache key
TTL = 3600                        # expiry in seconds

# getting stored data; initialize a dict if nothing was stored previously
stored = cache.get(storedDataName)
if stored is None:
    stored = {}

# finding stored items
try:
    alreadyHaveItem = stored['itemKey']
except KeyError:
    print('no result in cache')

# adding new items
newItems = {'itemKey': 'value1', 'anotherKey': 'value2'}
for key, value in newItems.items():
    stored[key] = value

# saving the results back in the cache
cache.set(storedDataName, stored, time=TTL)
