Microsoft Access query speed - performance

I am writing a report about MS Access and I can't find any information about its performance compared to alternatives such as Microsoft SQL Server, MySQL, Oracle, etc. It seems obvious that MS Access will be the slowest of the lot, but there is no solid documentation confirming this other than forum threads, and I don't have the time or resources to do the research myself.

Access isn't always the slowest. For fairly simple queries with one user, it is actually quite fast.
But throw a few extra users in there, or use complex joins, and it will fall apart on you.
Here is what I could find quickly:
http://blog.nkadesign.com/2009/access-vs-sql-server-some-stats-part-1/
http://www.linuxtoday.com/news_story.php3?ltsn=2001-07-27-006-20-RV-SW
http://swik.net/MySQL/MySQL+vs+MS+SQL+Server
Oddly enough, few people compare Access to the "real" databases, since its user limit is such a limiting factor.

Here are Microsoft's reasons to upgrade from Access to SQL Server:
http://office.microsoft.com/en-us/access-help/move-access-data-to-a-sql-server-database-by-using-the-upsizing-wizard-HA010275537.aspx

I've actually seen an Access address book of over 3 million records perform very fast while being used by hundreds of users. This is an exception, however. Access databases degrade in performance and stability as soon as the database is modified while in use, especially if it is modified by more than a couple of users.

Yes, Access (Jet) is slow. It's best to move to Access Data Projects; they let you use Access forms and reports with SQL Server, a REAL database.
SQL Server is the most popular database around; it has been installed more often than any other database, period.

Related

Couchbase and NoSQL DBs for a serious web application?

I've read some posts here, but I didn't find a real answer.
I normally work with conventional SQL databases (MS SQL, MySQL) when developing applications (ERP, CRM, PPS, web shops, etc.). Real contact with, or experience of, document-oriented databases in an actual business setting has not been possible.
I have only tested MongoDB and CouchDB privately (hobby and experimental projects). The experience was good, but not good enough to say "Yes, let's use it for business!", because I could not test them in a real environment.
But now there is a chance to build something from scratch, which could be a big start for a business.
So my questions:
Can I use Couchbase for a big business application that thousands of users would use? Is it fast enough, with good enough performance, to handle thousands of queries, requests/responses, etc.?
What do backup and restore look like?
What are Couchbase's limits?
Thank you for the answers.
In short, yes.
Your questions are too broad to fully address here. Couchbase has many real-world installations with clients doing production work at large scale. You can see several references, with write-ups of their use cases, on the Couchbase site. (Note this is not a complete list of customers, only the ones that have agreed to have their use highlighted.) You will definitely recognize some names.

MonetDB - anyone using it in production?

I am very interested in using MonetDB as a data mart, holding some huge data tables for querying and reporting.
However, after some searching, I am unable to find any online posts or blogs describing the use of MonetDB in any kind of production capacity.
Also, there seems to be little or next to no activity online regarding MonetDB.
Is this a bad sign for the future of MonetDB?
> I am very interested in using MonetDB as a data mart, holding some huge data tables for querying and reporting.
My boss is also interested in MonetDB and I had the same reaction as you. No one is writing about MonetDB... is no one using MonetDB?
Regardless, I have been running performance tests on datasets of 500,000 to 1,000,000 records, comparing MonetDB (a column-oriented DBMS) against MySQL (a row-oriented DBMS), and MonetDB beats MySQL in all regards, even in bulk inserts, which in theory a column store should not be as good at.
I can't speculate as to what all this means for MonetDB's future, but while it's around you might want to check it out because it performs well.
(I run Windows 7 and am communicating with each database using PHP)
I'm reacting a bit late to this post, but I'd like to add my voice to those using MonetDB in a production environment. We use it as the back-end of Spinque, a framework for designing complex search solutions. I've been using MonetDB for about 10 years, but only in the past 3 years in a production environment. Clearly, it has pros and cons and bugs like all other products, but it is being developed and improved very actively (I don't understand the low-activity signs you refer to). If you want a DB that allows you to be ahead of the market standards, it's a good choice. Otherwise, just go for MS SQL ;)
I've been evaluating it lately for a client, so I've had some time with it. My impression at this point is that it is just finishing "growing up" from being an academic experimental playground. It clearly has yet to be really discovered, and it does have some rough edges that might hinder certain applications.
As I write, I'm in the process of loading over 100 million rows into an instance (at 27 million presently). So far, it performs startlingly well in some areas (aggregates), but is oddly sluggish in others (most joins I've tried so far); that said, I haven't yet run the recommended sampling process, and I'm forcing it to live on a single server with 32 GB of RAM.
I've found a few little glitches and one thing that caused a full service crash (obscure and reported), but I'm thinking that for many applications MonetDB could be just the ticket. Columnar storage (rather than NoSQL) seems to be the future IMO.
I'll update this if I find anything particularly interesting.
MonetDB is first and foremost a research system, but it has progressed far beyond the level of the average research prototype. It is the only open-source relational column-store platform I know of that supports full SQL. I have used it myself at CWI in many research projects that are not core DB research but do need advanced DB technology.
You can see on the users' mailing list that deployments happen in many different organisations. As Roberto Cornacchia stated in a different answer, it is the backend of all Spinque deployments, and we are happy MonetDB users. MonetDB is also used in a variety of non-profit projects like OpenStreetMap and Open KvK.
More and more commercial parties deploy MonetDB for analytics. (They do not always like to advertise that their analyses depend on an open source system.) Recently, MonetDB Solutions has started to provide dedicated commercial support for these deployments.
We have been using MonetDB in our business. We analyse very large data sets with many millions of rows. Traditional methods of data warehousing on SQL databases had become too slow, and the problem we were facing was that the data was only going to get bigger! The only way forward was to go columnar.
The results have been amazing. When you have very few joins it is staggeringly quick. Even with joins on the data sets we are looking at, it is still frightening how fast it comes back.
Having seen some of the commercial partnerships, I think MonetDB is going to boom over the next few years. I believe some of the major BI suppliers are using Monet under the hood to perform the large-data work.

SQLite for client-server

I've seen a couple of SQLite performance questions here on Stack Overflow, but the focus was on websites, and I'm considering using this DB in a client-server scenario:
I expect 1-10 clients for one server for now; this could go up to 50 or more in the future.
slightly more reads than writes
the DB would sit behind a server process (i.e: not using direct DB access through a network)
Would using SQLite make the app less responsive as opposed to using PostgreSQL? My intuition tells me that it should be ok for these loads, but maybe someone has some practical experience with this kind of scenario.
I did use SQLite for a major client/server product used by ~10 concurrent users, and I deeply regret that decision. In my opinion, PostgreSQL is much more suitable for client/server scenarios than SQLite, due to its finer-grained locking.
You simply can't get very far when the entire database is locked whenever someone needs to write something.
I like SQLite very much (I even wrote a commercial utility for comparing SQLite databases, SQLite Compare), but I don't think it fits the bill in client/server scenarios.
Even SQLite's author says that it should be used as a replacement for custom file formats and not as a full-blown database server. I wish I had taken his advice more seriously.
You didn't mention which operating system and PostgreSQL version you are using. However, before considering a change of database engine, try to do some logging and benchmarking of your current database under typical usage, then optimize the "heaviest" queries. And maybe your backend processing load makes DB query time irrelevant anyway? Since SQLite is a file-based DBMS, concurrent access from multiple processes will degrade performance as the number of clients grows (edited after comment).
The following question may be helpful: How Scalable is SQLite?
I would second S.Lott's answer.
I don't know how SQLite performs in comparison to PostgreSQL, since I don't know of any recent measurements, but my own experience with SQLite in a rather similar environment is quite good.
The only thing that might cause trouble, in my view, is that you have rather many writes. But it all depends on the total number per second, I would say.
Also, your design of having a single server process is optimal for SQLite in my opinion; it sidesteps SQLite's weakness in concurrent multi-process access.

Would SQLite be a 'better' choice for Joomla than MySQL, if it were available?

Since this doesn't touch a real problem of mine, I'm somewhat uncertain whether it is even worth asking here. However, maybe some of you would like to share your opinion on it.
In general I have to admit that 'better' means everything and nothing at all at the same time. So I probably should be more specific, but I tried not to overload the topic. In a regular hosted environment on one of those cheap web hosts (like Dreamhost), with around 1000 articles in Joomla, a couple of users, and a few hundred visitors a day, would a SQLite database with a persistent connection (sqlite_popen) perform noticeably faster than the MySQL equivalent (with the TCP/IP overhead etc.)?
Or, in short: would it be wise to ask the Joomla project to support SQLite?
I have never used SQLite on a website, but I have used it extensively for other purposes and I quite like it. The truth is, you won't know until you try. If you do try, I recommend creating a DB abstraction layer first so that you can easily swap in other DBs.
The downside to SQLite is that it's not really meant to be a multi-user database. If you rarely write to the DB but do lots of reading, SQLite will probably be fine. If you need multiple processes writing to the same DB, though, keep in mind that SQLite uses file-level locking to maintain database consistency. So if all your tables are in the same file, a writer locks the whole file, even if another process wants to modify a completely different table.
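A minimal standalone VBScript sketch of what that means in practice, assuming ADO over the third-party SQLite3 ODBC driver (the connection string, file path, and table names are all assumptions, and whether the second write errors immediately or after a busy timeout depends on the driver settings):

    Dim connA, connB
    Set connA = CreateObject("ADODB.Connection")
    Set connB = CreateObject("ADODB.Connection")
    connA.Open "DRIVER=SQLite3 ODBC Driver;Database=C:\data\site.db"
    connB.Open "DRIVER=SQLite3 ODBC Driver;Database=C:\data\site.db"

    connA.Execute "BEGIN IMMEDIATE"     ' connA takes the write lock on the whole file
    connA.Execute "UPDATE articles SET title = 'x' WHERE id = 1"

    On Error Resume Next
    connB.Execute "UPDATE users SET name = 'y' WHERE id = 7"   ' different table, same file
    If Err.Number <> 0 Then
        WScript.Echo "Second writer blocked: the lock covers the entire database file."
    End If
    On Error GoTo 0

    connA.Execute "COMMIT"              ' releases the lock; connB can retry now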
In my opinion, it's not the big multi-user databases of the world that should be worried about competition from SQLite... it's all the regular files out there (and their custom file formats) that applications create and use that should be shaking in their boots.
Linux ISPs, for whatever reason, seem to have settled on MySQL. That is what they offer, and you will lock yourself into a limited number of service providers if you wander outside the norm.

Finding performance bottlenecks in a classic ASP/SQL Server website

I've got an old classic ASP/SQL Server app which is constantly throwing 500 errors/timeouts even though the load is not massive. Some of the DB queries are pretty intensive, but nothing that should be causing it to fall over.
Are there any good pieces of software I can install on my server that will show precisely where the bottlenecks are, in either the ASP or the DB?
Some tools you can try:
HP (formerly Mercury) LoadRunner or Performance Center
Visual Studio Application Center Test (Enterprise Editions only?)
Microsoft Web Application Stress tool (aka WAS, aka "Homer"; predecessor to Application Center Test)
WebLoad
MS Visual Studio Analyzer if you want to trace through the application code. This can show you how long the app waits on DB calls, and what SQL was used. You can then use SQL Profiler to tune the queries.
Where is the timeout occurring? Is it on the lines where ASP connects to or executes SQL? If so, your problem is either with the connection to the DB server or with the DB itself. Load up SQL Profiler in MS SQL to see how long the queries take. Perhaps it is due to locks in the database.
Do you use transactions? If so, make sure they do not lock your database for a long time. Make sure you use transactions in ADO, not across the entire ASP page. You can also ignore locks in SQL SELECTs by using the WITH (NOLOCK) hint on tables.
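For example, a minimal sketch of both points (the table, the columns, and the sConnString connection string are made-up names):

    Dim conn
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open sConnString                 ' hypothetical connection string

    ' Keep the transaction on the ADO connection and keep it short:
    conn.BeginTrans
    conn.Execute "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1"
    conn.Execute "UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2"
    conn.CommitTrans                      ' commit immediately; don't hold it across page logic

    ' Reads that can tolerate dirty data can skip shared locks entirely:
    Dim rs
    Set rs = conn.Execute("SELECT Id, Balance FROM Accounts WITH (NOLOCK)")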
Make sure your database is optimized with indexes.
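For instance, index the columns your WHERE and JOIN clauses actually filter on (run once against the database, not per request; the table and column here are hypothetical):

    conn.Execute "CREATE INDEX IX_Orders_CustomerId ON Orders (CustomerId)"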
Also make sure you are connected to the DB for as short a time as possible, i.e. (schematic, not working code): conn.Open : Set rs = conn.Execute(sql) : rs.Close : conn.Close. So store recordsets in a variable instead of looping through them while holding the connection to the DB open. A good way is to use ADO's GetRows() function, as shown in the sketch below.
Always explicitly close your ADO objects and set them to Nothing; failing to do so can cause the connection to the DB to remain open.
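A minimal sketch of the open-late/close-early pattern with GetRows(), including the explicit cleanup (the query, the field, and sConnString are made-up names):

    Dim conn, rs, rows, i
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open sConnString                   ' hypothetical connection string
    Set rs = conn.Execute("SELECT Name FROM Customers")
    rows = rs.GetRows()                     ' copy the whole resultset into a 2-D array
    rs.Close : Set rs = Nothing
    conn.Close : Set conn = Nothing         ' connection is released before any rendering

    For i = 0 To UBound(rows, 2)            ' render without holding any DB resources
        Response.Write Server.HTMLEncode(rows(0, i)) & "<br>"
    Next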
Enable connection pooling.
Load the ADO constants in global.asa if you are using them.
Do not store any objects in Session or Application scope.
Upgrade to the latest versions of ADO, MDAC, SQL Server service packs, etc.
Are you sure the server can handle the load? Maybe upgrade it? Is it on shared hosting? Maybe your app is not the problem.
It is quite simple to measure a script's performance by timing it from the first line to the last. This way you can identify slow-running pages.
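A minimal sketch of that kind of timing (the one-second threshold and the logging approach are just illustrations):

    Dim tStart, tElapsed
    tStart = Timer()                        ' first line of the page

    ' ... the page's real work goes here ...

    tElapsed = Timer() - tStart             ' last line of the page
    If tElapsed > 1 Then                    ' flag anything slower than one second
        Response.AppendToLog "SLOW PAGE " & Request.ServerVariables("SCRIPT_NAME") & " took " & tElapsed & "s"
    End If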
Have you tried running the SQL Server Profiler on the server? It will highlight any unexpected activity hitting the database from the app as well as help identifying badly performing queries.
If you're satisfied that the DB queries are necessarily intensive, then perhaps you need to set more appropriate timeouts on the pages that use those queries.
Set Server.ScriptTimeout to something larger; you may also need to set the timeout on any ADO Command objects used by the script.
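For example (the values are arbitrary and should be tuned to your slowest legitimate query; dbo.BigReport is a hypothetical stored procedure, and conn is assumed to be an open ADODB.Connection):

    Server.ScriptTimeout = 300              ' seconds; the default is 90

    Dim cmd
    Set cmd = Server.CreateObject("ADODB.Command")
    Set cmd.ActiveConnection = conn         ' reuse the existing open connection
    cmd.CommandTimeout = 120                ' seconds; the ADO default is 30
    cmd.CommandText = "EXEC dbo.BigReport"
    cmd.Execute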
Here's how I'd approach it.
Look at the running tasks on the server. Which is taking up more CPU time, SQL Server or IIS? Most of the time it will be SQL Server, and it certainly sounds that way based on your post. It's very rare that an ASP application actually does a lot of processing on the ASP side of things, as opposed to the COM or SQL sides.
Use SQL Profiler to check out all the queries hitting the database server.
Deal with the low-hanging fruit first. Generally you will have a few "problem" queries that hit the database frequently and chew up a lot of time. Deal with these. (A truism in software development is that 10% of the code chews up 90% of the execution time...)
In addition to looking at query costs with SQL Profiler and Query Analyzer/SQL Studio and doing the normal SQL performance detective work, you might also want to check whether your database calls are returning inordinate amounts of data to your ASP code. I've seen cases where innocuous-looking queries returned HUGE amounts of unneeded data to ASP: the classic "select * from tablename" kind of query, written by lazy or inexperienced programmers, that returns 10,000 huge rows when the programmer really only needed one field from one row. The reason I mention this special case is that these sorts of queries often have low execution times and low query costs on the SQL side, and can therefore slip under the radar.
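To illustrate the difference (the table and column names are made up):

    ' Drags the entire table across the wire just to read one value:
    Set rs = conn.Execute("SELECT * FROM Customers")

    ' Fetches exactly what the page needs:
    Set rs = conn.Execute("SELECT Email FROM Customers WHERE CustomerId = 42")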
