I need help from a MySQL guru here. I have a multithreaded application that reads and updates a MySQL database every half second, all day… no inserts… What is the best engine for this application, InnoDB or MEMORY?
Note: the drive is an SSD.
I am looking for speed, and assume hardware resources are not an issue.
Thank you, folks.
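To make the comparison concrete, here is a rough benchmark sketch you could adapt, assuming mysql-connector-python and a throwaway test schema; the credentials, table, and column names are made up for illustration:

```python
# Hypothetical read/update benchmark for InnoDB vs. MEMORY
# (assumes mysql-connector-python and a local test database).
import time
import mysql.connector

conn = mysql.connector.connect(user="root", password="secret", database="test")
cur = conn.cursor()

for engine in ("InnoDB", "MEMORY"):
    cur.execute(f"DROP TABLE IF EXISTS bench_{engine}")
    cur.execute(f"CREATE TABLE bench_{engine} (id INT PRIMARY KEY, val INT) "
                f"ENGINE={engine}")
    cur.execute(f"INSERT INTO bench_{engine} VALUES (1, 0)")
    conn.commit()

    start = time.time()
    for i in range(10_000):  # the workload in question: reads and updates, no inserts
        cur.execute(f"SELECT val FROM bench_{engine} WHERE id = 1")
        cur.fetchall()
        cur.execute(f"UPDATE bench_{engine} SET val = %s WHERE id = 1", (i,))
    conn.commit()
    print(f"{engine}: {time.time() - start:.2f}s for 10k read/update pairs")
```

Keep in mind the trade-off behind the numbers: MEMORY tables avoid disk I/O entirely but use table-level locking and lose their contents on restart, while InnoDB has row-level locking (better for multithreaded updates) and serves a hot working set from its buffer pool anyway.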
Related
How much data can the free version of Datomic handle in terms of storage and throughput? As far as I know, the free transactor uses H2 to store a local embedded database.
What stops me from using this in production, ignoring the obvious lack of storage redundancy and limited number of peers (1)?
Datomic Pro Starter Edition is probably more of a go-to than the free edition, since it supports all of the same storage services as the licensed Pro version. In terms of local storage and throughput, I'd say it can store up to the amount of space available on the disk. The transactor handles all writes, so I'd imagine your only potential bottlenecks for throughput would be the hard disk or, if you're throwing tons of transactions at it, waiting for indexing jobs to complete as your data grows. Indexing can get computationally expensive over time, and it's best to have GC and excision jobs in place if you're handling that much volume.
We would be using Virtuoso for storing RDF; the triple count will be 100 million to start with. I need to know the typical RAM, CPU, disk, etc. required for this. Querying will be via SPARQL, and some of the queries will be fairly complex.
Kindly provide your inputs.
The average size of a Virtuoso version 6.x triple (quad) is about 30 bytes, so for 100 million triples you would need about 3 GB of RAM. This is the most critical component, as it allows the database working set to fit in memory; once the database is "warmed up", data no longer needs to be loaded from disk, which gives the best performance, especially when running complex queries. In terms of disk, the faster it is, the quicker the database can be loaded into memory and checkpoints performed, so SSDs or similar devices are recommended where possible, especially if memory is limited and reading from disk is at times unavoidable. As for the processor, any standard commodity 64-bit processor available today would suffice, typically running on a Linux x86_64 system of your choice; as said, memory is always the most critical component.
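For the record, the arithmetic behind that figure is just triples times bytes per quad; a one-line sketch (the 30-byte value is the rule of thumb quoted above, and the real footprint varies with indexes and data shape):

```python
# Back-of-the-envelope RAM sizing from the 30-bytes-per-quad rule of thumb.
triples = 100_000_000
bytes_per_quad = 30
print(f"~{triples * bytes_per_quad / 2**30:.1f} GB RAM")  # ~2.8 GB
```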
See the following Virtuoso FAQ and performance tuning documents for more details:
http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VirtRDFPerformanceTuning
http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/#FAQ
For example, I have a DB with 20 GB of data and only 2 GB of RAM; swap is off. Will I be able to find and insert data? How bad would performance be?
It's best to google this, but many sources say that when your working set outgrows your RAM, performance will drop significantly.
Sharding might be an interesting option, rather than adding more RAM.
http://www.mongodb.org/display/DOCS/Checking+Server+Memory+Usage
http://highscalability.com/blog/2011/9/13/must-see-5-steps-to-scaling-mongodb-or-any-db-in-8-minutes.html
http://blog.boxedice.com/2010/12/13/mongodb-monitoring-keep-in-it-ram/
http://groups.google.com/group/mongodb-user/browse_thread/thread/37f80ff39258e6f4
Can MongoDB work when the size of the database is larger than RAM?
What does it mean to fit "working set" into RAM for MongoDB?
You might also want to read up on the Foursquare outage last year:
http://highscalability.com/blog/2010/10/15/troubles-with-sharding-what-can-we-learn-from-the-foursquare.html
http://groups.google.com/group/mongodb-user/browse_thread/thread/528a94f287e9d77e
http://blog.foursquare.com/2010/10/05/so-that-was-a-bummer/
side-note:
you said "swap is off" ... ? why? You should always have a sufficient swap space on a UNIX system! Swap-size = 1...2-times RAM size is a good idea. Using a fast partition is a good idea. Really bad things happen if your UNIX system runs out of RAM and doesn't have Swap .. processes just die inexplicably.. that is a bad very thing! especially in production. Disk is cheap! add a generous swap partition! :-)
It really depends on the size of your working set.
MongoDB can handle a very large database and still be very fast if your working set is less than your RAM size.
The working set is the set of documents you are working with at any given time, plus the indexes.
Here is a link which might help you understand this : http://www.colinhowe.co.uk/2011/02/23/mongodb-performance-for-data-bigger-than-memor/
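If you want to see how much of your data mongod actually holds in RAM, serverStatus reports it; a minimal sketch assuming pymongo and a server on localhost (the exact meaning of the fields varies a bit across versions and storage engines):

```python
# Check mongod's resident vs. virtual memory via serverStatus (assumes pymongo).
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
mem = client.admin.command("serverStatus")["mem"]
print("resident MB:", mem.get("resident"))  # RAM actually used by mongod
print("virtual  MB:", mem.get("virtual"))   # total mapped address space
```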
Do ramdisks really improve VS2010 performance (in general, and build times)?
If so, what are all the steps I have to take to get the maximum benefit from it?
Can it also help ReSharper?
Thanks,
André Carlucci
In my experience a ramdisk is slower than an SSD for builds. It can even be slower than an HDD...
RAMdisk slower than disk?
So do not bother with a ramdisk; buy an Intel or Crucial SSD, but not OCZ.
EDIT:
After many tries I figured it out. When the ramdisk is formatted as FAT32, then even though benchmarks show high values, real-world use is actually slower than an NTFS-formatted SSD. But an NTFS-formatted ramdisk is faster in real life than an SSD. Still, I would not bother with a ramdisk anyway; an SSD is everything you need.
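If you want to reproduce that comparison yourself, a crude sequential-write timing loop is enough to expose the filesystem effect; the two paths below are placeholders for your ramdisk and SSD mounts, and note that OS write caching will skew short runs:

```python
# Crude sequential-write throughput test; point it at any mount.
import os
import time

def write_speed_mb_s(path, mb=256):
    data = os.urandom(1024 * 1024)  # 1 MB of incompressible data
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the OS cache
    os.remove(path)
    return mb / (time.time() - start)

for path in (r"R:\bench.tmp", r"C:\bench.tmp"):  # hypothetical ramdisk / SSD
    print(path, f"{write_speed_mb_s(path):.0f} MB/s")
```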
So here is the issue you will run into. Yes, it can improve performance, but with some serious caveats:
You stand the chance of losing all your work between syncs.
There is a noticeable lag when the ramdisk syncs to disk.
This will require you to set up sync intervals that match how you work.
I'd recommend getting a SATA III solid-state drive instead and backing your work up weekly.
I tried a ramdisk one or two years ago. I remember that the build time was about 30% faster.
That was too little to outweigh the disadvantages mentioned by Andrew Finnell.
I have two identical servers, and currently I am monitoring both of them with performance counters.
IIS 6 with .NET Framework 2
I am noticing that one server has high disk writes of ~3,300 writes/sec, while the other has ~199 writes/sec.
Has anyone encountered the same problem?
What may cause high disk writes?
These servers are load balanced (50%-50%)
Thanks
Lots of things can cause high disk activity.
Swapping
There's lots of stuff being written to disk
You left ODBC tracing on (oops!)
...
Sounds like you're already using Performance Monitor; add some more counters to watch per process rather than system-wide:
Process \ IO Write Operations/sec
Process \ IO Write Bytes/sec
Process \ Page File Writes/sec
I'm not sure about the page file writes counter (this is from memory), but there should be something like that in there. You should be able to isolate the high activity to a single process, and that should help you figure it out.
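If you'd rather script the per-process isolation than click through Performance Monitor, psutil exposes similar cumulative I/O counters (this is my own suggestion, not what the Perfmon counters above do under the hood):

```python
# Sample per-process write rates over a short interval using psutil.
import time
import psutil

INTERVAL = 5  # seconds

def write_count(proc):
    """Cumulative write operations for a process, or None if inaccessible."""
    try:
        return proc.io_counters().write_count
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        return None

before = {}
for p in psutil.process_iter(["name"]):
    count = write_count(p)
    if count is not None:
        before[p.pid] = (p.info["name"], count)

time.sleep(INTERVAL)

rates = []
for p in psutil.process_iter():
    if p.pid in before:
        count = write_count(p)
        if count is not None:
            name, old = before[p.pid]
            rates.append(((count - old) / INTERVAL, name))

for rate, name in sorted(rates, reverse=True)[:10]:  # top 10 writers
    print(f"{rate:10.1f} writes/sec  {name}")
```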
Multiple issues can cause high disk writes; one of them is paging.
Is one of these servers under high memory load?