Storing items in a map or in rows in Cassandra - performance

I need to store lists of users per customer in Cassandra. There are two basic approaches I see:
A: create table users (      // one row per user
       customer int,
       userId int,
       login text,
       name text,
       email text,
       primary key (customer, userId)
   );
or
B: create table users (      // one row per customer
       customer int primary key,
       users map<int, text>
   );
where in the second approach I would store a JSON representation of the user data as "text".
I will have the following operations on the table:
insert / update / delete single user
read all users for a customer
read a single user by id and customer
Here are the questions:
1) For large user lists, B is a bad idea. What order of magnitude would "large" be?
2) Would you expect B to have better performance for small user lists? What order of magnitude would "small" be?
3) What other advantages / disadvantages do you see for A or B?
(For those who need to know: I'm using scala / datastax driver / phantom to access the database.)

I would stick with A, definitely.
Collections can have at most 64k queryable elements, so that's your hard limit. And C* reads the whole collection on every query, so you want to keep collections as small as possible to avoid huge read penalties.
I expect the performance to be of the same order of magnitude because both are sequential reads.
In B you would be using non-idempotent queries to update the collection. My mistake, it's a map, not a list, so updates are idempotent after all.
A makes it very easy to update your schema. In B you'd need to read-modify-write your records.
Stick with A.
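For reference, a minimal CQL sketch of the three operations under design A (the literal values are placeholders):
// insert or update a single user (CQL inserts are upserts)
insert into users (customer, userId, login, name, email)
values (42, 7, 'jdoe', 'John Doe', 'jdoe@example.com');

// read all users for a customer (a single-partition read)
select * from users where customer = 42;

// read a single user by id and customer
select * from users where customer = 42 and userId = 7;

// delete a single user
delete from users where customer = 42 and userId = 7;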

Related

Is there any way in Redis to put keys without any common regex pattern in the same hash slot?

I have the following data model for which I want to use Redis as a cache.
Employee: With a unique Employee_Id.
Department: With a unique Department_Id.
An employee can be part of only one department, a department can have many employees. Now, the operations the system should support are something like this.
Given an Employee_Id, find the department it is a part of.
Given a Department_Id, find the list of all its employees.
Merge two departments; in this case the employees of one department will move to the other, whichever way requires the fewest DB operations.
I'm using DynamoDB as the persistent storage, with two tables representing Employee and Department. I'm performing merge operations using DynamoDB transactions to ensure ACID.
Now, I'm planning to use Redis as a cache between the service and the DB. For each employee_Id as key, I'll store the department it is part of. For each department_id as key, I'll store the list of members in the department. Now, for the merge use case I'll have to update the values for a number of employee -> department mappings. For this I want to use Redis transactions or operations like MSET, MGET etc.
For transactions in Redis, we need to ensure that all keys are in the same hash slot. However, in our case the EmployeeIds (keys) are randomly generated UUIDs, so they will not have any common regex pattern to use for hash tags. But the values that they point to, i.e. the Department_id, will be common for them.
Is there any way in Redis to put keys (employee_Id) without any common regex pattern in the same hash slot?
I'll put all such entries (for which I might want to perform transactions in future) into Redis at the same time, hence I was thinking of appending a random string as a hash tag (between '{' and '}') to the keys. But when getting the value for a key I won't know the random string that was added; I need to fetch values based on the original keys only.

Database: Storing multiple Types in single table or multiple intermediate tables for Delta Tables

Using Java and Oracle.
We need to push changes to an employee's Email and UserID to a third party.
The actual table is Employee, and we keep an intermediate table which we use to compare changes before sending them to the third party.
Following are database designs coming in mind for intermediate table:
Only Single table:
EmployeeID|Value|Type|UpdateDate
Value is the userid or email, and type will be 'email' or 'userid'. The update date is kept so we can figure out which of email or userid changed and update the third party.
Multiple Table:
Employee_EmailID
EmpId|EmailID|Updatedate
Employee_UserID
EmpId|UserID|Updatedate
The Java flow will be:
Pick employee from actual table.
Pick employee from above intermediate table.
Compare differences. Update difference to third party.
Update above table with updated value and last update date.
Which is considered the best way, the single-table approach or multiple tables, or is there a standard way to implement this? There are 10,000 employees in the system.
The intermediate table just stores delta records, i.e. the records transferred to the third party, so that they can be compared the next day.
Good database design has separate tables for different concepts. Using the same database column to hold different types of data will lead to code which is harder to understand, prone to data corruption and less performant.
You may think it's only two tables and a few tens of thousands of rows, so does it matter? But that is only your current requirement. What you choose now will set the template for what happens when (say) you need to add telephone numbers to the process.
Now in future if we get 5 more entities to update
Do you mean "entities", like say Customers rather than Employees? Or do you really mean "attributes" as in my example of Employee Telephone Number?
Generally speaking we have a separate table for distinct entities, and all the attributes of that entity are grouped at the same cardinality. To take your example, I would expect an Employee to have one UserID and one Email Address so I would design the table like this:
Employee_audit
EmpId|UserID|EmailID|Updatedate
That is, I have one record which stores the complete state of the Employee record at the Updatedate.
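A minimal Oracle sketch of that audit table (the column types and the composite key are assumptions):
create table Employee_audit (
    EmpId      number        not null,   -- employee identifier
    UserID     varchar2(50),
    EmailID    varchar2(320),
    Updatedate date          not null,   -- when this state was captured
    constraint pk_employee_audit primary key (EmpId, Updatedate)
);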
If we add a new entity, Customers, then we have a new table. Simple. But a new attribute like Employee Phone Number offers a choice, because an employee can have more than one: work landline, mobile, fax, home, etc. So we could represent this in three ways: a child table with a type column, multiple child tables for each type, or as distinct columns on the Employee record.
For the main Employee table I would choose the separate table (or tables, depending on whether I'm shooting for 6NF). But for an audit table I would choose one record per Employee and pivot the phone numbers like this:
Employee_audit
EmpId|UserID|EmailID|Landline|Mobile|Fax|Home|Updatedate
The one thing I would never do is have a single table with type and value columns. It seems attractive because it means we could track additional entities without any further DDL. But in fact it becomes harder to re-assemble the complete state of an Employee at any given time with each attribute we add. Also it means the auditing process itself is more complicated (because it needs to determine which attributes have changed and whether it needs to audit the change) and more expensive (because changing three attributes on the same record entails inserting three audit records).
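To make that concrete, here is a sketch of what re-assembling an employee's state as of a given date looks like against the type/value design (Employee_Delta is a hypothetical name for the single intermediate table; the cut-off date is a placeholder):
-- latest value per (employee, type) as of the cut-off date, pivoted back into columns
select a.EmployeeID,
       max(case when a.Type = 'userid' then a.Value end) as UserID,
       max(case when a.Type = 'email'  then a.Value end) as EmailID
from   Employee_Delta a
where  a.UpdateDate = (select max(b.UpdateDate)
                       from   Employee_Delta b
                       where  b.EmployeeID = a.EmployeeID
                       and    b.Type       = a.Type
                       and    b.UpdateDate <= date '2020-01-31')
group by a.EmployeeID;
Every new attribute adds another max(case ...) branch to this query, whereas the one-row-per-Employee audit table needs no pivot at all.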

Star Schema: How are the fact table aggregations performed?

https://web.stanford.edu/dept/itss/docs/oracle/10g/olap.101/b10333/globdiag.gif
Assume that we have a star schema as above.
My question is: in real time, how do we populate the (unit_price, unit_cost) columns of the fact table?
Can anyone provide me star schema tables with real data?
I am having a hard time understanding star schemas...
Please help!
A star schema consists of two types of tables: fact tables and dimensions.
The idea of the star design is that you can split your data in two parts.
The static part is described with dimensions and the dynamic part (the transactions) with the fact table.
Each transaction is stored in the fact table as a new record and is connected to the surrounding dimensions, that define the context of the transaction.
The example in the link contains two fact tables: SHIPMENTS and PRODUCT_CONDITIONS.
Note that the fact tables in the link are dubbed UNITS_HISTORY_FACT and PRICE_AND_COST_HISTORY_FACT, but I don't find that the best choice of names.
The SHIPMENTS table stores one record for each shipment of a PRODUCT to a CUSTOMER at some TIME, via a defined CHANNEL.
All the above information is defined using the corresponding keys of the respective dimensions.
The fact table also contains MEASURES describing the attributes of the transaction, here the number of UNITS shipped.
The structure of the fact table would therefore be (a DDL sketch follows the column list):
CUSTOMER_ID
PRODUCT_ID
TIME_ID
CHANNEL_ID
UNITS
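A minimal DDL sketch of that fact table, assuming surrogate keys and dimension tables named customer_dim, product_dim, time_dim and channel_dim (all names and types here are assumptions):
create table units_history_fact (
    customer_id number not null references customer_dim (customer_id),
    product_id  number not null references product_dim (product_id),
    time_id     number not null references time_dim (time_id),
    channel_id  number not null references channel_dim (channel_id),
    units       number not null   -- the measure: units shipped
);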
The second fact table (bottom) is more interesting, because here you split the product in two parts:
PRODUCT dimension defining the ID, name and other more static attributes
PRODUCT_CONDITION: this is the fact table, designed with the expectation that the price and cost of the product will change over time.
With each change of the price or cost, insert a new record in the fact table and connect it to the PRODUCT and the TIME (of the change).
The structure of this fact table would therefore be (again, a DDL sketch follows the column list):
PRODUCT_ID
TIME_ID
UNIT_PRICE
UNIT_COST
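A matching sketch for the price-and-cost fact table, one row per product per change (again, names and types are assumptions):
create table price_and_cost_history_fact (
    product_id number        not null references product_dim (product_id),
    time_id    number        not null references time_dim (time_id),
    unit_price number(12, 2) not null,
    unit_cost  number(12, 2) not null
);

-- whenever a price or cost changes, insert a new row for that product and time
insert into price_and_cost_history_fact (product_id, time_id, unit_price, unit_cost)
values (101, 20150401, 9.99, 6.50);   -- placeholder key values and amounts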
A final note on the design of the TIME dimension.
The best practice for connecting the fact table with the dimension tables is to use meaningless IDs (surrogate keys), but for the TIME dimension you should be careful. For big (time-partitioned) fact tables the natural key (a DATE) is often used instead, to be able to deploy the partitioning features. See How I Defined a Time Dimension Using a Surrogate Key and other resources on the web for more details.
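To illustrate that last point, here is a sketch of the same kind of fact table keyed by the natural DATE and range-partitioned on it (Oracle interval partitioning; all names are assumptions):
create table units_history_fact_by_day (
    customer_id number not null,
    product_id  number not null,
    time_key    date   not null,   -- natural key instead of a surrogate time_id
    channel_id  number not null,
    units       number not null
)
partition by range (time_key)
interval (numtodsinterval(1, 'DAY'))
(partition p_start values less than (date '2015-01-01'));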

SQL Server heavily queried table - should I store secondary info (HTML text) in another table?

The Overview:
I have a table "category" that is for the most part used to categorise products and currently looks like this:
CREATE TABLE [dbo].[Category]
(
CategoryId int IDENTITY(1,1) NOT NULL,
CategoryNode hierarchyid NOT NULL UNIQUE,
CategoryString AS CategoryNode.ToString() PERSISTED,
CategoryLevel AS CategoryNode.GetLevel() PERSISTED,
CategoryTitle varchar(50) NOT NULL,
IsActive bit NOT NULL DEFAULT 1
)
This table is heavily queried to display the category hierarchy on a shopping website (typically every page view) and can have a substantial number of items.
I'm using the Entity Framework in my data layer.
The Question:
I have a need to add what could potentially be a fairly large "description", which could be the entire contents of a web page. I'm wondering whether I should store it in a related table rather than adding it to the existing Category table, given that the Entity Framework will drag the "description" column out of the database 100% of the time, when 99.5% of the time I'll only want the CategoryTitle and CategoryId.
Typically I wouldn't worry about the overhead of the Entity Framework, but in this case I think it might be important to take it into consideration. I could work around this with a view or a complex type from a stored proc, but that means a lot of refactoring that I'd prefer to avoid.
I'm just interested to know if anyone has any thoughts, suggestions or a desire to slap my wrists in relation to this scenario...
EDIT:
I should add that the reason I'm hesitating to set up a secondary table is because I don't like the idea of adding an additional table that has a 1 to 1 relationship with the Category table - it seems somewhat pointless. But I'm also not a DBA so I'm not sure whether this is an acceptable practice or not.
You could put your column in the table and then create an index covering all the other columns. That way the index will be used for all the lookups you do with your current schema.
The key word for this construction is Covering Index: http://en.m.wikipedia.org/wiki/Database_index#Covering_index
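A minimal T-SQL sketch of such an index on the table above (the index name and the key/INCLUDE split are assumptions; the new description column is deliberately left out of it, so the hot lookups never touch it):
CREATE NONCLUSTERED INDEX IX_Category_Covering
    ON dbo.Category (CategoryId)
    INCLUDE (CategoryNode, CategoryString, CategoryLevel, CategoryTitle, IsActive);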
I would store it in a different table, for the simple reason of not increasing the record size of the Category table. An increase in record size due to such a VARCHAR column reduces the number of records that fit on a given disk page (8KB in SQL Server), thereby increasing the number of pages that must be fetched into main memory for a search, which increases the number of disk accesses and hurts query execution times.
I would store this in a different table (i.e. vertically partition the Category table into most-frequently-accessed and not-so-frequently-used columns), define a one-to-one relationship at the application layer with the entity that contains the not-so-frequently-used column as a member of the main Category entity, and set the fetch type to LAZY.
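A sketch of that vertical split (the side table's name, the nvarchar(max) type, and the assumption that CategoryId is, or is made, the primary key of Category are all mine):
CREATE TABLE [dbo].[CategoryDescription]
(
    CategoryId int NOT NULL PRIMARY KEY
        REFERENCES dbo.Category (CategoryId),   -- 1:1 with Category; assumes CategoryId is its PK
    Description nvarchar(max) NOT NULL          -- the bulky page-sized text lives only here
);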

How are applications like Twitter implemented?

Suppose A follows 100 people;
then we will need 100 join statements,
which I think is horrible for the database.
Or are there other ways?
Why would you need 100 Joins?
You would have a simple table "Follows" with your ID and the other person's ID in it...
Then you retrieve the "Tweets" by joining something like this:
Select top 100
    tweet.*
from
    tweet
    inner join followers on followers.FollowerID = tweet.AuthorID
where
    followers.MasterID = yourID
Now you just need decent caching and make sure you use a non-locking query, and you have all the information... (Well, maybe add some user data into the mix.)
Edit:
tweet
ID - tweetid
AuthorID - ID of the poster
Followers
MasterID - (Basically your ID)
FollowerID - (ID of the person you follow)
The Followers table has a composite primary key based on MasterID and FollowerID.
It should have 2 indexes: one on (MasterID, FollowerID) and one on (FollowerID, MasterID).
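A minimal sketch of that table and its two orderings (constraint and index names are made up; the column comments follow the query above, where MasterID holds the account doing the following):
CREATE TABLE Followers
(
    MasterID   int NOT NULL,   -- the account doing the following ("your ID")
    FollowerID int NOT NULL,   -- the account being followed
    CONSTRAINT PK_Followers PRIMARY KEY (MasterID, FollowerID)   -- covers (MasterID, FollowerID)
);

-- the reverse ordering, for looking up who follows a given account
CREATE INDEX IX_Followers_Reverse ON Followers (FollowerID, MasterID);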
The real trick is to minimize your database usage (e.g., cache, cache, cache) and to understand usage patterns. In the specific case of Twitter, they use a bunch of different techniques, from queuing to an insane amount of in-memory caching to some really clever data flow optimizations. Give Scaling Twitter: Making Twitter 10000 percent faster and the other associated articles a read. The answer to your question about how you implement "following" is to denormalize the data (precalculate and maintain join tables instead of performing joins on the fly) or not to use a database at all. <-- Make sure to read this!
