This question came up when I was given a task that made me think for a while, but I was not able to come up with a more accurate or satisfying answer.
The task was something like this:
Things I already have
a table "User" which contain details about the user. Like createdTime, type(agent, admin, other), id, etc.,(This table contains too many rows(entries) in it)
The task given to me is
create a new table which will keep track of users who are deleted.
Then join the "User" table with this newly created table and show the users which are not already deleted and are of type='agent'.
Now my question is
"why are they asking me to create a new table instead of creating a new column(in User table) which will store a flag(true If user is deleted, else not deleted)
"Is join not time consuming?"
(creating a new column in the current "User" table will help in keeping the detail intact with the User.But creating a new table - where this will help?)
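For comparison, the flag-column idea I had in mind would be roughly this (again, names are illustrative):
ALTER TABLE User ADD COLUMN isDeleted BOOLEAN NOT NULL DEFAULT FALSE;
SELECT * FROM User
WHERE  isDeleted = FALSE
  AND  type = 'agent';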
When I asked my team member this question, he replied, "If you create a new column, the value will be empty for a lot of rows, and that wastes space" (and what he said is right too).
This suggests that he cares more about space and less about query time.
Nowadays we can buy almost any amount of space, but speed is more important, right? If things are slow, what is the use of anything that saves disk space?
Shouldn't we care more about time and less about space?
I would like to know what you think about this. What is more important to you, and why?
I know this question may get down-voted, but I wanted to know what most developers think in such a case. What do they care about more: space or time?
Thanks for your time.
This will be an answer, even though it's a personal take, because it's too big for a comment:
The two are rarely separate from each other; rather, they are a function of each other.
Speed is also not just 'query time' but total execution time, including data handling - from request to reply.
So if you have something that hogs space, you have more I/O and more memory usage and need to spend more time accessing data and more time transmitting data.
So my "answer" would be:
Treat both as equal and minimize both.
"This suggests that he cares more about space and less about query time."
No. That's a wrong assumption.
If you start thinking about normalization and de-normalization in the database when thinking about performance, you are almost always better off normalizing the data.
This is not only to save space, but also to ease maintenance of data and data integrity (faster updates, less locking) and indexing (space, yes - but also speed), and then, when using the data, I/O transfer from disk to server, memory usage, and transmission across networks. Also - space usage means memory usage, and the more space things take up, the more you want to put into memory as well.
All these things lead to performance, aka speed.
The times you start thinking of de-normalization in the database, you usually do it in connection with pre-calculating results and queries and utilizing caching, so you don't have to do the joins on demand. So while there are some situations where de-normalization is a plausible solution, more often than not you're still better off normalizing the data.
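Just to illustrate what I mean by pre-calculating (made-up names; a materialized view would do the same job where the database supports one):
-- Pre-compute the expensive join once, on a schedule, instead of on every request.
CREATE TABLE agent_order_counts AS
SELECT u.id AS agent_id, COUNT(o.id) AS order_count
FROM   User u
JOIN   Orders o ON o.agent_id = u.id
WHERE  u.type = 'agent'
GROUP  BY u.id;
-- Reads then hit the small pre-computed table:
SELECT order_count FROM agent_order_counts WHERE agent_id = 42;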
We're looking at implementing event sourcing / CQRS and for 95% of our system I can reason about the events and it doesn't scare me.
On the other hand, we also have a requirement where customers can insert data for millions of records in one go. A large portion of them can be updated in one go as they move location, etc., or have batch-level details updated. It also needs to be reversible if they change their mind moments later.
Each record relates to a physical entity in the real world and it's important that the read model is updated quickly and the audit trail preserved at all costs for each record.
I can't seem to find any advice on how to handle these volumes. Are you supposed to write an event for every single record and action and just accept that it's going to be computationally / Database expensive? Are there any case studies that have similar requirements?
Any guidance is appreciated.
Are you supposed to write an event for every single record and action and just accept that it's going to be computationally / Database expensive?
A potentially helpful heuristic -- how would you do it with a version control system? Would you create an empty document, and then introduce a million commits, or would you have a single Data Imported commit, and go from there?
An important consideration to notice is that the authority for the data is somewhere else. "Physical entities in the real world" are not subject to the rules of your domain model; what you have there is a big pile of reference data.
It can help to think in processes -- what you have is an "import reference data" process, which has a relatively small number of immediate steps, and, independently, a lot of "do interesting things with each record" processes, which may turn out to be millions of little processes, each with some small number of events.
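A rough sketch of that split, with a made-up event store table (names and payloads are purely illustrative): the import itself becomes one event, and the per-record processes append their own small streams later, if and when something interesting happens to a record.
CREATE TABLE events (
    stream_id   VARCHAR(100) NOT NULL,
    version     INT          NOT NULL,
    event_type  VARCHAR(100) NOT NULL,
    payload     TEXT         NOT NULL,
    occurred_at TIMESTAMP    NOT NULL,
    PRIMARY KEY (stream_id, version)
);
-- One event for the whole batch, pointing at the imported data,
-- rather than a million record-level events:
INSERT INTO events (stream_id, version, event_type, payload, occurred_at)
VALUES ('import-batch-42', 1, 'ReferenceDataImported',
        '{"source": "customer-upload-42", "recordCount": 1000000}', CURRENT_TIMESTAMP);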
What happens when more than one user inserts data into a database (MySQL, Postgres) at exactly the same time? How does it prioritize which record is inserted first and which one later? If the answer is specific to the type of application, I am asking in reference to web applications.
In general, two things never happen at exactly the same time. There's a queue of work and at some level one thing always happens before the other.
However, there are cases where an overall transaction may take multiple steps -- and if two of these kinds of transactions begin at nearly the same time, they may overlap in time. This can cause problems.
For example, imagine a person buys something in a shopping cart, and the steps include both creating an order record for them and decrementing an inventory count. If two people begin this process at nearly the same time, they could both potentially buy the item before the inventory is decremented to show the item as out of stock.
In cases where things like this can occur, Postgres (and other modern databases) provides mechanisms that programs can use to protect themselves. These include both transactions and locking.
With transactions (see the Postgres docs on transactions), groups of statements are run as a single unit -- and if one of the later steps fails, all steps are 'rolled back'. (For example, if decrementing inventory isn't possible because the item is now out of stock, the order creation can be rolled back.)
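For example, the shopping-cart steps above could be wrapped like this in Postgres (table and column names are made up):
BEGIN;
INSERT INTO orders (customer_id, item_id, quantity) VALUES (42, 7, 1);
UPDATE inventory SET stock = stock - 1 WHERE item_id = 7 AND stock > 0;
-- If the UPDATE affected zero rows the item is out of stock, so the application
-- issues ROLLBACK instead of COMMIT and the order insert disappears as well.
COMMIT;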
With locking (see the Postgres docs on explicit locking), tables (or even individual rows in a table) are locked so that any other process wanting to access them either waits or times out. This would prevent two processes from updating the same data at nearly the same time.
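A row-level lock for the same scenario might look like this (again, illustrative names):
BEGIN;
-- Other transactions that try to lock this row wait (or time out) until we finish.
SELECT stock FROM inventory WHERE item_id = 7 FOR UPDATE;
UPDATE inventory SET stock = stock - 1 WHERE item_id = 7;
COMMIT;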
In general, the vast majority of applications don't require either of these approaches. Unless you're working in an environment such as at a bank where the tables involved contain financial transactions, you probably won't have to worry about it.
It's never exactly the same time. One will happen before the other.
Which one will, unless you implement your own prioritisation mechanism, is indeterminate, and you should never rely on it.
As to what will happen, well that depends.
For two inserts into the same table: if data integrity depends on the order in which they are executed, your database design has a horrendous flaw.
For collisions (two updates to the same record, for instance), there are two common implementations.
Pessimistic locking. Assume there will be a significant number of updates to the same data, so issue a lock around it. If the lock already exists, fail the update (e.g. the second one if the first hasn't finished) with some suitable message.
Optimistic locking. Assume collisions will rarely happen. The usual way of doing this is to add a timestamp field to the record which changes on every update. When you read the data you get the timestamp, and when you write the data you only do so if the timestamp you have matches the one that's there now, updating said timestamp as part of the write. If it does not match, you show the "Someone else has changed this data" message.
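A sketch of that check in plain SQL, using a version counter instead of a timestamp (the idea is the same; names are made up):
-- Read: remember the version you were given.
SELECT id, address, version FROM customer WHERE id = 42;   -- say this returns version 7
-- Write: only succeeds if nobody has changed the row since you read it.
UPDATE customer
SET    address = '12 New Street', version = version + 1
WHERE  id = 42 AND version = 7;
-- Zero rows updated means "someone else has changed this data".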
There is a compromise position, where you try to merge the two updates (for instance, you change the name and I change the address). You really need to think about that though: it's messy, it gets very complicated very quickly, and getting it wrong runs a real risk of messing up the data.
People with far larger IQs than mine spend a lot of time on this stuff; personally, I like to keep it like me: simple...
Basically it's a financial database, with both daily and intraday data (date,symbol,open,high,low,close,vol,openinterest) -- very simple structure. Updates are just once a day. A typical query would be: date and close price of MSFT for all dates in DB. I was thinking that there's got to be something out there that's been optimized for lots of reads and not many writes, as opposed to a general-purpose RDBMS like MySQL. I searched rubyforge.org, and I didn't see anything that specifically addressed this (as far as I could tell).
MS SQL Server can be optimized for this with the fairly simple:
ALTER DATABASE myDatabase
SET READ_COMMITTED_SNAPSHOT ON
SQL Server will automatically cache your data in memory if it is being used heavily for reads.
You can always use a RAMdisk for your MySQL installation if your database footprint is small enough. One way to make your tables small enough to fit is to create them as ARCHIVE tables. While they are very compact (compressed), they can only be appended to or read from, not updated. (http://dev.mysql.com/tech-resources/articles/storage-engine.html)
Generally a properly indexed and well organized MySQL table is really fast, especially when using MyISAM, and even more so when served from memory. The key is in denormalizing the data as heavily as you can, optimizing for your particular read scenarios.
For example, having a stock_id, date, price tuple is going to be fairly slow to sort and retrieve. If you have, instead, stock_id and a column with some serialized data, the retrieval time will be very quick.
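To make that concrete (purely illustrative schema):
-- One row per price point: flexible, but a full history means many rows to sort and fetch.
CREATE TABLE prices (stock_id INT, trade_date DATE, close DECIMAL(10,2));
-- One row per stock with the series serialized into a single column:
-- one read returns the whole history.
CREATE TABLE price_series (stock_id INT PRIMARY KEY, closes TEXT);  -- e.g. CSV or JSON of (date, close) pairs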
Another solution that is likely faster is to push all the data into an alternative DBMS like Tokyo Cabinet or something similar, especially if your data fits neatly into a key/value store.
Look at MySQL, but run the database from memory instead of disk. Depends on the size of your dataset and your budget, but you could then update memory from disk once a day, and have a very, very fast read time afterwards.
The best-known (to me at least!) time series database is Fame but it's expensive and I strongly doubt that there's anything like, say, an ActiveRecord implementation for it. Unless it's changed a lot in the 10 or so years since I last touched it, it isn't SQL-friendly at all.
With a fairly tightly-focused application, you can take a more flexible view of your data. For example, consider what information you're actually looking to store. Is it the atomic price/hi/lo/close/vol/whatever, or is it more appropriately a time series of such values? If you always want to view the series, store a series per row, not a value.
Throwing a few ideas out here...
How might it look if you stored a year or a month of a single value for a single stock in one row? Maybe as an XML string, or JSON or something more terse of your own devising. Compressed CSV, perhaps? That ought to fit a month's values into a 255-character column. (Use something like Huffman coding to do the encoding, perhaps - a single dictionary ought to work for all instances of such similar data).
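For instance, a month-per-row layout could be keyed like this (made-up schema; the payload would be the compressed/encoded CSV described above):
CREATE TABLE monthly_closes (
    stock_id  INT          NOT NULL,
    yearmonth CHAR(7)      NOT NULL,     -- e.g. '2009-03'
    closes    VARCHAR(255) NOT NULL,     -- encoded CSV of that month's daily values
    PRIMARY KEY (stock_id, yearmonth)
);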
You can still hold a horizontal view as well: with the extremely low update rate you'll have (should only be data fixes, I'd guess) you can probably stand to build that stuff.
There's an obvious downside to this: you'll have a bunch of extra work to do.
I don't have any personal experience, but MongoDB claims to offer relational-style flexibility with key-value performance.
As mentioned elsewhere, a key-value database might be worth looking at: Tokyo Cabinet, CouchDB or one of the others - again, perhaps, with a concatenated value for the time series.
Here's another one I've been thinking about lately.
We have concluded in earlier discussions: 'natural primary keys are bad, artificial primary keys are good.'
Working with Hibernate earlier, I saw that Hibernate by default creates one sequence for all tables. At first I was puzzled by this - why would you do that? But later I saw the advantage: it makes linking parents and children foolproof. Because no two tables share the same primary key values, accidentally linking a parent with a table that is not its child gives no results.
Does anyone see any downsides to this approach? I only see one: you cannot have more than 999999999999999999999999999 records in your database.
There could be performance issues with all code getting values from a single sequence - see this Ask Tom thread.
Depending on how sequences are implemented in the database, always hitting the same sequence can be better or worse. When only a few or only one thread request new values, there will be no locking issues. But a bad implementation could cause congestion.
Another problem is rolling back transactions: Sequences don't get rolled back (because someone else might have requested a higher value already), so you can have large gaps which will eat your number space much more quickly than you might expect. OTOH, it will take some time to eat 2 or 4 billion IDs (if you "only" use 32 bit (signed) ints), so it's rarely an issue in practice.
Lastly, you can't easily reset the sequence if you have to. But if you need to have a restarting sequence (say, number of records since midnight), you can tell Hibernate to create/use a second sequence.
A major advantage is that you can uniquely identify objects anywhere in the DB just by the ID. That means you can severely cut down the log information you write in the production system and still find something if you only have the ID.
I prefer having one sequence per table. This comes from one general observation: Some tables ("master tables") have a relatively small row count and have to be kept "forever". For example, the customer table in an ERP.
In other tables ("transaction tables"), many rows are generated perpetually, but after some time, those rows can be archived (or simply deleted). The most extreme example is a tracing table used for debugging purposes; it might grow by hundreds of rows per second, but each row is obsolete after a few days.
Small IDs in the master tables make it easier when working directly on the database, e.g. for debugging purposes.
select * from orders where customerid=415
vs
select * from orders where customerid=89461836571
But this is only a minor issue. The bigger issue is cycling. If you use one sequence for all tables, you simply cannot let it restart. With one sequence per table, you can restart the sequences for the transaction tables when you have archived or deleted the old data. Master tables hardly ever have that problem, since they grow much slower.
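To illustrate (syntax varies by database; older Oracle versions need a drop-and-recreate instead of RESTART):
-- Per-table sequences can be reset independently of each other.
CREATE SEQUENCE customer_seq;   -- master table: effectively never reset
CREATE SEQUENCE trace_seq;      -- transaction table: reset after the old rows are archived
ALTER SEQUENCE trace_seq RESTART WITH 1;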
I see little value in having only one sequence for all tables. The arguments given so far do not convince me.
There are a couple of disadvantages to using a single sequence:
reduced concurrency. Handing out the next sequence value involves synchronisation. In practice, I do not think this is likely to be a big problem
Oracle has special code when maintaining btree indexes to detect monotonically increasing values and balance the tree appropriately
The CBO might have a better time estimating range queries on the index (if you ever did this) if most values were filled in
An advantage might be that you can determine the order of inserts amongst different tables.
Certainly there are pros and cons to the one-sequence versus one-sequence-per-table approach. Personally I find the ability to assign a truly unique identifier to a row, making each id column a uuid, to be enough of a benefit to outweigh any disadvantages. As Aaron D. succinctly writes:
you can uniquely identify objects anywhere in the DB just by the ID
And, for most applications, due to the way Hibernate3 batches INSERT statements, this will not be a performance bottleneck unless massive numbers of records are vying for the same db resource (SELECT hibernate_sequence.nextval FROM dual).
Also, this sequence mapping is not supported in the latest release (1.2) of Grails. Though it was supported in Grails 1.1 (!). It now requires subclassing one of the Hibernate dialect classes as a workaround.
For those using Grails/GORM, have a look at this JIRA entry:
Oracle Sequence mappings ignored
We have a table with, say, 5 indices (one clustered).
Question: will it somehow negatively affect optimizer performance - either speed or accuracy of index picks - if all 5 indices start with the same exact field? (all other things being equal).
It was suggested by someone at the company that it may have detrimental effect on performance, and thus one of the indices needs to have the first two fields switched.
I would prefer to avoid change if it is not necessary, since they didn't back up their assertion with any facts/reasoning, but the guy is senior and smart enough that I'm inclined to seriously consider what he suggests.
NOTE1: The basic answer "tailor the index to the where clauses and overall queries" is not going to help me - the index that would be changed is a covered index for the only query using it and thus the order of the fields in it would not affect the IO amount. I have asked a separate SO question just to confirm that assertion.
NOTE2: That field is a date when the records are inserted, and the table is pretty big, if this matters. It has data for ~100 days, about equal # of rows per date, and the first index is a clustered index starting with that date field.
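To make the setup concrete, the indexes look roughly like this (hypothetical index and column names):
CREATE CLUSTERED INDEX ix_date_a ON BigTable (insert_date, col_a);
CREATE INDEX ix_date_b ON BigTable (insert_date, col_b);
CREATE INDEX ix_date_c ON BigTable (insert_date, col_c);
CREATE INDEX ix_date_d ON BigTable (insert_date, col_d);
CREATE INDEX ix_date_e ON BigTable (insert_date, col_e);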
The optimizer has to think more about which if any of the indexes to use if there are five. That cost is usually not too bad, but it depends on the queries you're asking of it. In principle, once the query is optimized, the time taken to execute it should be about the same. If you are preparing SELECT statements for multiple uses, that won't matter much. If every query is prepared afresh and never reused, then the overhead may become a drag on the system performance - particularly if it turns out that it really doesn't matter which of the indexes is actually used for most queries (a moderately strong danger when five indexes all share the same leading columns).
There is also the maintenance cost when the data changes - updating five indexes takes noticeably longer than just one index, plus you are using roughly five times as much disk storage for five indexes as for one.
I do not wish to speak for your senior colleague but I believe you have misinterpreted what he said, or he has not expressed himself explicitly enough for you to understand.
One of the things that stands out about poorly designed, and therefore poorly performing, tables is that they have many indices on them, and the leading columns of the indices are all the same. Every single time.
So it is pointless debating (the debate is too isolated) whether there is a server cost for indices which all have the same leading columns; the problem is the poorly designed table which exposes itself in myriad ways. That is a massive server cost on every access. I suspect that that is where your esteemed colleague was coming from.
A monotonic column is a very poor choice for an index (understood, you need at least one). But when you use that monotonic column to force uniqueness in some other index, which would otherwise be irrelevant (due to low cardinality, such as SexCode), that is another red flag to me. You've merely forced an irrelevant index to be slightly relevant; the queries, except for the single covered query, perform poorly on anything beyond the simplest select via primary key.
There is no such thing as a "covered index", but I understand what you mean, you have added an index so that a certain query will execute as a covered query. Another flag.
I am with Mitch, but I am not sure you get his drift.
Last, responding to your question in isolation, having five indices with the leading columns all the same would not cause a "performance problem" beyond that which you already have due to the poor table design, but it will cause angst and unnecessary manual labour for the developers chasing down weird behaviour, such as "how come the optimiser used index_1 for my query yesterday but today it is using index_4?".
Your language consistently (and particularly in the comments) displays a manner of dealing with issues in isolation. The concept of a server and a database, is that it is a shared central resource, the very opposite of isolation. A problem that is "solved" in isolation will usually result in negative performance impact for everyone outside that isolated space.
If you really want the problem dealt with, fully, post the CREATE TABLE statement.
I doubt it would have any major impact on SELECT performance.
BUT it probably means you could reorganise those indexes (based on a representative query workload) to serve queries more efficiently.
I'm not familiar with the recent version of Sybase, but in general with all SQL servers,
the main (and almost only) performance impact indexes have is on INSERT, DELETE and UPDATE queries. Basically, each change to the database requires the data table per se (or the clustered index) to be updated, as well as all the indexes.
With regards to SELECT queries, having "too many" indexes may have a minor performance impact for example by introducing competing hard disk pages for cache. But I doubt this would be a significant issue in most cases.
The fact that the first column in all these indexes is the date, and assuming a generally monotonic progression of the date value, is a positive thing (with regard to CRUD operations), for it will keep the need for splitting/balancing the indexes to a minimum (since most inserts happen at the end of the indexes).
Also, this table appears to be small enough ("big" is a relative word ;-) ) that some experimentation with it, to assess performance issues in a more systematic fashion, could probably be done relatively safely and easily without interfering much with production. (Unless the 10k or so records are very wide, or the queries-per-second rate is high, etc.)