Why does querying data from a remote Oracle DB into a local table take so long in MS Access? - oracle

I am connecting to a remote Oracle DB using MS Access 2010 and the ODBC for Oracle driver.
In MS Access it takes about 10 seconds to execute:
SELECT * FROM SFMFG_SACIQ_ISC_DRAWING_REVS
But takes over 20 minutes to execute:
SELECT * INTO saciq_isc_drawing_revs FROM SFMFG_SACIQ_ISC_DRAWING_REVS
Why does it take so long to build a local table with the same data?
Is this normal?

The first query only reads the data, and the client may not be fetching the full result set in one go. The second both reads and writes the data, which will always take longer.
You haven't said how many records you're retrieving and inserting. If it's tens of thousands, then 20 minutes (roughly 1,200 seconds) seems quite good. If it's hundreds, then you may have a problem.
Have a look here https://stackoverflow.com/search?q=insert+speed+ms+access for some hints as to how to improve the response and perhaps change some of the variables - e.g. using SQL Server Express instead of MS Access.
You could also do a quick speed comparison test by trying to insert the records from a CSV file and/or Excel cut and paste.
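As a rough timing test you could also copy just a subset of rows first, to see how insert time scales. A minimal sketch in Access/Jet SQL, using the table name from the question (the `_test` target name is invented, and `TOP` without an `ORDER BY` returns an arbitrary subset, which is fine for timing):

```sql
-- Access/Jet SQL: copy only the first 1000 rows from the linked
-- Oracle table into a new local table, to time a smaller insert.
SELECT TOP 1000 *
INTO saciq_isc_drawing_revs_test
FROM SFMFG_SACIQ_ISC_DRAWING_REVS;
```

If 1000 rows insert quickly but the full copy is disproportionately slow, the bottleneck is more likely the row-by-row ODBC write path than the Oracle read.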

Related

jdbc template batch update with snowflake database is very slow

I have a spring boot application that connects to the Snowflake Database and uploads records (approx 50 columns of different data types). I am using
JdbcTemplate.batchUpdate(insertSql, values, types)
to do the bulk insert. Currently, it is consuming around 100 seconds for 50,000 records. I want to improve the batch performance but am not able to find an optimal solution.
I referred to and tried the solution mentioned in this post, but it didn't help at all. Any suggestions will be highly appreciated
I moved away from batch insert to the Snowflake COPY command using JDBC. It is lightning fast. With the COPY command, it barely takes 2-3 seconds to load 50,000 records from a CSV file with an XS (extra small) size data warehouse.
Moreover, in case of error, the messages are very clear and can be viewed in information_schema.load_history. Different file formats can be loaded, and there are a variety of options to customize the load process.
In my case, I first load the CSV file to the internal staging area (takes less than 1 second), run the COPY command (takes 1-2 seconds), verify the load status in the information_schema.load_history table (takes a few milliseconds), and proceed accordingly.
This article was also helpful for running copy command with JDBC
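The staging-plus-COPY flow described above can be sketched as follows. These are standard Snowflake SQL commands (typically executed through a JDBC Statement); the file path, stage path, and table name are placeholders:

```sql
-- Upload the local CSV to the user's internal stage (runs client-side).
PUT file:///tmp/records.csv @~/staged AUTO_COMPRESS=TRUE;

-- Bulk-load the staged file into the target table in one server-side pass.
COPY INTO my_table
FROM @~/staged/records.csv.gz
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1);

-- Check the outcome of the most recent load.
SELECT * FROM information_schema.load_history
ORDER BY last_load_time DESC LIMIT 1;
```

The key design difference from batchUpdate is that COPY moves the data as one compressed file and parses it inside the warehouse, instead of sending 50,000 individual parameterized statements over JDBC.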

Long Running Query on MSSQL

In my team, we need to connect to Oracle, Sybase and MSSQL very frequently. We use Oracle's SQL Developer version 3.3.2 to connect to all three (using third-party libs). This tool often has a problem where select queries never end. Even after we get the results, the query keeps on running, and because of this we receive database alerts for long-running queries.
E.g.
Select * from products
If products has a million records, then SQL Developer will show the top records, but in the background the query will keep on running.
How can this problem be solved?
Or is there a better product that can fulfill our need?
Your query - select * from products - is asking the database engine to send millions of records to your client application (SQLDeveloper in this case).
While SQLDeveloper (and many other GUIs of a similar design) will show you the first 30 (or 50, or 100, etc) rows, as far as the database engine is concerned you're still asking to see millions of rows hence your query continues to 'run' in the database engine.
For example, in Sybase ASE the query will show up with a status of 'send sleep' meaning the database engine is waiting for the client application to request the next batch of records to send down the connection.
To 'solve' this issue you have a few options:
using SQLDeveloper: scroll through (i.e., display on your monitor) the rest of the multi-million row result set [likely not what you want to do; you probably don't have the time/desire to hit the 'Next' button hundreds of thousands of times]
kill off your query after you've received/viewed the first set of records [not recommended, as there will likely be times when you 'forget' to kill off your query, thus earning the wrath of your DBA]
write your query to pull back only the records you REALLY want/need to see (e.g., add a WHERE clause to limit the set of rows)
see if SQLDeveloper has any sort of configuration option to auto-kill any 'long running' queries [I have no idea if this is even doable in a client application]
see if the DBA can configure your login with a resource limit (e.g., auto-kill queries if they run for more than XX seconds)

access-2013 report opens very slow

I have a weird problem here with a report which I use every day.
I moved from XP to Win 7 some time ago and use Access 2013.
(The language is German, so sorry, I can only guess what the views are called in English.)
"Suddenly" (I really can't say when this started) opening the report in "report view" takes VERY long, around 1 minute or so. Then, switching to "page view" and formatting the report takes only 2 or 3 seconds. Switching back to report view again takes 1 minute.
The report has a complex query as its data source (in fact, a UNION of 8 sub-queries). Opening this query displays the data after 1 second, which is OK.
All tables are linked from the same ODBC data source, which points to a MySQL server on our network.
For further testing I opened every table the queries use, one after another. I noticed that opening these tables takes around 9 seconds for every single table. It doesn't matter if it's a small or big table; it's always these 9 seconds.
The ODBC data source is defined using the IP address of the server, not the name, so I don't consider it a nameserver problem / timeout / ...
What could cause this slowdown on opening tables?
I'm puzzled...
Here are a few steps I would try:
Take a fresh copy of the Access app running on one of those "fast clients" and see if that solves the issue
Try comparing performance with a fast client after setting the same default printer
Check the version of the ODBC driver on both machines, and if you rely on a DSN, compare them

SQLCE performance in windows phone very poor

I'm writing this thread as I've fought this problem for three whole days now!
Basically, I have a program that collects a big CSV file and uses that as input to a local SQL CE database.
For every row in this CSV file (which represents some sort of object, let's call it a "dog"), I need to know whether this dog already exists in the database.
If it already exists, don't add it to the database.
If it doesn't exists, add a new row in the database.
The problem is, every query takes around 60 milliseconds (in the beginning, when the database is empty) and it goes up to about 80 ms when the database is around 1000 rows big.
When I have to go through 1000 rows (which in my opinion is not much), this takes around 70,000 ms = 1 minute and 10 seconds (just to check if the database is up to date), which is way too slow! Considering this amount will probably some day be more than 10,000 rows, I cannot expect my user to wait for over 10 minutes before his DB is synchronized.
I've tried to use a compiled query instead, but that does not improve performance.
The field which I'm searching on is a string (which is the primary key), and it's indexed.
If it's necessary, I can update this thread with code so you can see what I do.
SQL CE on Windows Phone isn't the fastest of creatures but you can optimise it:
This article covers a number of things you can do: WP7 Local DB Best Practices
They also provide a WP7 project that can be downloaded so you can play with the code.
On top of this article, I'd suggest changing your PK from a string to an int; strings take up more space than ints, so your index will be larger and take more time to load from isolated storage. Certainly in SQL Server, searches of strings are slower than searches of ints/longs.
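Another angle worth trying (my suggestion, not from the linked article): replace the 1000 per-row lookups with one set-based statement, so the engine makes a single pass instead of a round-trip per dog. The table and column names below are invented for illustration, and this assumes your SQL CE version accepts INSERT ... SELECT with NOT EXISTS; `StagingDogs` would hold the freshly parsed CSV rows:

```sql
-- Insert only the dogs that are not already present in one statement,
-- instead of issuing a lookup query per CSV row.
INSERT INTO Dogs (DogKey, Name)
SELECT s.DogKey, s.Name
FROM StagingDogs AS s
WHERE NOT EXISTS (SELECT 1 FROM Dogs AS d WHERE d.DogKey = s.DogKey);
```

Even if staging the CSV into a table costs something up front, eliminating a per-row query/response cycle is usually where the minute of wall-clock time goes.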

Sql server vs MS Access performance

I have an existing application which uses SQL Server 2005 as a back-end. It contains a huge number of records; I need to join tables which contain 50K-70K records. The client machine has lower-end hardware.
So, can I improve its performance by using MS Access as a back-end? I also need to run search operations on an Access file. So, which one is better for performance?
Is querying Access better than querying SQL Server on lower-end hardware?
Because SQL Server runs as a separate process, caches results, and uses RAM and processing power even when not being queried, if the other computer has very little RAM or a very slow processor (or, perhaps even more importantly, a single-core processor), I could see a situation where SQL Server is actually SLOWER than MS Access.
Without information about your hardware setup, approximately what percentage of your application relies on querying the database, etc., I'm not sure this question can be easily answered.
MS SQL Server 2005 Express requires at least 512 MB RAM (see http://www.microsoft.com/sqlserver/2005/en/us/system-requirements.aspx), so if your lower-end hardware doesn't have at least 512MB, I would certainly choose MS Access over SQL Server.
I should also add that you may want to consider SQLite (see http://www.sqlite.org/), which should have significantly less overhead than MS SQL Server. I'm not certain how it would stack up against MS Access over Jet, but my gut instinct is that it would perform better with less overhead.
70,000 records is really not that big for SQL Server (or Access, for that matter). I would echo what has already been said and say that, all things being equal, SQL Server will outperform Access.
I would go back to your query and look at the execution plan to see why it is so slow; missing indexes, out-of-date statistics or a whole host of other reasons could explain your current performance problems.
SQL Server also gives you the option of using indexed (materialized) views to help with performance. The trade-off is slower insert/update/delete performance, but if you read more than you write it might be worth it.
I think Albert Kallal's comment is right, and the fact is that if you have a single-user app running on a single workstation (Access client with SQL Server running on the same workstation as a client), it will quite often be slower than if the setup on that workstation were Access client to Jet/ACE back end on the same machine. SQL Server adds a lot of overhead that delivers no benefit when there is no network in between the client and the SQL Server.
The performance equation flips when there's a network involved, even for a single-user app. If the Access client runs on a workstation, and the SQL Server on a server on the other end of a network connection (even a fast one), it will likely be faster than if the data is stored in a Jet/ACE file on a file server.
But it's not a given, in my opinion. It depends entirely on the engineering of the application and the excellence of the schema.
I tried SQL Server Express 2005 vs MS Access 2010; many people said SQL Server would run faster than MS Access (I also thought so at first). But what happened then surprised me: running queries with MS Access was faster than SQL Server by a significant margin (with the same data and structure, because I had converted from Access to SQL Server beforehand).
But I don't know its performance in other operations like insert, update, and delete yet.
Local SQL Server Express 2014
1 Minute about 2200 records (2200x connect to DB and retrieve 1 record)
External SQL Server Express 2014 (different IP)
1 Minute about 2200 records (2200x connect to DB and retrieve 1 record)
External SQL Server 2000 (old server)
1 Minute about 10000 records (10000x connect to DB and retrieve 1 record)
Local Access Database
1 Minute about 55000 records (55000x connect to DB and retrieve 1 record)
We are also surprised.
I'll answer the question directly, but first it is important to know a few things about Access and SQL.
In general, I have found that a small database with up to 10K records will perform equally well on either Access or SQL Server if all machines have reasonable hardware. Access has the benefit of simplicity for a small number of users, up to 4, but also has a size limitation of 2GB, so you need to be careful that the database size stays below this limit. Some databases start small but have a way of growing over time; that's something to keep in mind when planning for the future of your program and/or database. If you might approach the 2GB limit, one option is to use Microsoft SQL Server 2014 Express edition, which has a database size limit of 10GB. SQL Express is full SQL Server, but with size limitations. Full-blown SQL Server 2014 has a max database size of 524PB (524,000,000GB), so it would be fair to say it has no practical limit.
If your database has more than 10K records and especially for larger databases of 100K records or more, SQL can demonstrate significant performance gains.
Some performance gains with MS Access can be achieved by using "pass-through queries", as in any program that sends optimized SQL to the server.
Why? The answer comes from how the technology works under the hood. With Access, if it is not using a pass-through query, it may read an entire table, find which records it needs, and then show the result. With a pass-through query, the server's SQL engine returns just the results in a very efficient manner.
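As a hedged illustration of that difference: a query like the one below behaves very differently depending on how Access runs it (the table and column names are invented; the SQL itself is what you would paste into an Access pass-through query whose connection string points at SQL Server):

```sql
-- Run against linked tables, Jet/ACE may pull rows from both tables
-- across the network and perform the join client-side.
-- Sent as a pass-through query, SQL Server performs the join and the
-- date filter itself and returns only the matching rows.
SELECT o.OrderID, c.CompanyName
FROM Orders AS o
INNER JOIN Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2015-01-01';
```

The SQL text can be identical in both cases; what changes is where the work happens and how much data crosses the wire.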
At the end of the day, if you have a small (<10K record) database used by up to 4 people, MS Access might make sense. If you have plans that the database could grow to more than 10K records or be used by more than 5 users, SQL would be the logical choice.
Specifically for the question posed about a 50-70K record database: I think if you have reasonable hardware, SQL Server will generally perform better; if you have a unique situation (such as lower-end hardware on the SQL Server machine), a move to Access could see some improvements.
My take on this topic is that one should think of payload in terms of pickup truck versus an 18 wheeler. Better/worse/faster/slower somewhat misses the point. It is a matter of choosing the appropriate vehicle for the payload.
70K records is easily handled by today's PCs, so one may as well stick with the pickup truck. Unless an organization already has an installed skill set for SQL Server, there would be no reason to use it for an on-premise Windows application of just 70K records. Obviously, if it is a web/mobile app that requires a back-end DB technology, then Access isn't a candidate.
SQL Server will always give you better performance because the query is executed on the server. Access on the back-end won't help because your client application will need to pull all the data from the tables, and then perform the join locally.
SQL Server has better indexing options... Filtered indexes, included columns, etc
There is ZERO chance that Access query is faster than a properly indexed SQL Server database query.
