Thingworx query takes too long - performance

In the backend, queries execute very fast, but the queries driving the interface are extremely slow. ThingWorx connects to a SQL database, and the database itself works just fine. I have tried rebooting TWX and also storing the data directly in TWX, but there is so much data that this does not work properly. The interface mainly uses graphs that plot several parameters and their values at a 15-minute interval.

Related

Fetching rows with Snowflake JDBC while the query is running in the server

I have a complex query that runs a long time (e.g. 30 minutes) in Snowflake when I run it in the Snowflake console. I am making the same query from a JVM application using the JDBC driver. What appears to happen is this:
Snowflake processes the query from start to finish, taking 30 minutes.
JVM application receives the rows. The first receive happens 30 minutes after the query started.
What I'd like to happen is that Snowflake starts to send rows to my application while it is still executing the query, as soon as data is ready. This way my application could start processing the rows in the first 30 minutes.
Is this possible with Snowflake and JDBC?
First of all, I would suggest checking the Snowflake warehouse size and tuning it. It's not worth waiting 30 minutes when resizing the warehouse can cut the query time to a quarter or less. By doing either of the below, your cost will be almost the same or lower, since query execution time decreases roughly linearly as you increase the warehouse size. Refer to the Snowflake documentation on scaling:
Scale up by resizing a warehouse.
Scale out by adding clusters to a warehouse (requires Snowflake Enterprise Edition or higher).
As for JDBC, I believe it behaves the same way here as it does with other databases.

1st access to Oracle SP is very slow, subsequent access seem fine

Not sure if this question has already been asked. I'm facing a problem where the first hit from the website to an Oracle SP takes a lot of time; subsequent accesses work just fine.
The SP I'm talking about here is a dynamic SP used for search functionality (with different search criteria selection options available).
1st access time ~200 seconds
subsequent access time ~20 to 30 seconds.
Stored procedure logic at a high level:
Conditional JOINs are appended based on certain conditions.
Dynamic SQL and a cursor are used to retrieve data.
Any help on how to start tackling these kinds of issues would be much appreciated.
Thanks,
Adarsh
The reason it takes much less time to execute the query after the first run is that Oracle caches the results. If you change the SQL text, Oracle considers it a different query and won't serve the results from the cache; it executes the new query (even reformatting the code or adding a space in between counts as a difference).
How to speed up the first execution is a harder question. You'll need to post your query and its explain plan, and you'll probably have to answer further questions if you want to get help with that.
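For illustration only (not from the original answer), here is a minimal C# sketch of the bind-variable idea, assuming the site calls Oracle from .NET via the ODP.NET managed driver; the table and column names are made up:

```csharp
using Oracle.ManagedDataAccess.Client; // assumed driver; adapt to your actual data access layer

// Sketch only: the SQL text stays byte-for-byte identical between calls, and only the
// bound value changes, so Oracle can reuse the already-parsed cursor instead of
// re-parsing a slightly different statement on every search.
public static OracleCommand BuildSearch(OracleConnection conn, string customerName)
{
    var cmd = new OracleCommand(
        "SELECT order_id, order_date FROM orders WHERE customer_name = :customer", conn);
    cmd.BindByName = true;
    cmd.Parameters.Add(new OracleParameter("customer", customerName));
    return cmd;
}
```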

Entity Framework query performance

I have a problem with a quite complex query executed through Entity Framework that takes a lot of time, almost 50 seconds. The query is executed via an ad-hoc call to a web service, which creates a new ObjectContext, executes the query and returns the result.
The problem is that if I trace the T-SQL code with SQL Server Profiler and execute it from SQL Server Management Studio, it takes about 2 seconds... what could it be?
Thank you,
Marco
For every ObjectContext type that touches the database, Entity Framework does a lot of startup work building an internal representation of the database schema. This can take a long time (about 30 seconds in our project), and it is rolled into the cost of the first query made against the database. Subsequent queries are plenty fast, until the process is restarted. Does that apply to you?
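If that is the case, one common workaround is to pay that startup cost before the first user request reaches the service. A minimal sketch, assuming an ObjectContext-based model; MyEntities and Orders are placeholders for your generated context and any small entity set:

```csharp
using System.Linq;

// Sketch only: run once at application start (e.g. from Global.asax Application_Start).
// The throwaway query forces Entity Framework to build its internal metadata and
// query views, so the first real request no longer pays that startup penalty.
public static class EfWarmup
{
    public static void Run()
    {
        using (var ctx = new MyEntities())          // placeholder ObjectContext type
        {
            var _ = ctx.Orders.FirstOrDefault();    // any cheap query will do
        }
    }
}
```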

Best strategy for retrieving large dynamically-specified tables on an ASP.NET page

Looking for a bit of advice on how to optimise one of our projects. We have an ASP.NET/C# system that retrieves data from a SQL Server 2008 database and presents it in a DevExpress ASPxGridView. The data can come from one of a number of databases, all of which are slightly different and are being added and removed regularly. The user is presented with a list of live "companies", and the data is retrieved from the corresponding database.
At the moment, data is being retrieved using a standard SqlDataSource and a dynamically-created SQL SELECT statement. There are a few JOINs in the statement, as well as optional WHERE constraints, again dynamically-created depending on the database and the user's permission level.
All of this works great (honest!), apart from performance. When it comes to some databases, there are several hundreds of thousands of rows, and retrieving and paging through the data is quite slow (the databases are already properly indexed). I've therefore been looking at ways of speeding the system up, and it seems to boil down to two choices: XPO or LINQ.
LINQ seems to be the popular choice, but I'm not sure how easy it will be to implement with a system that is so dynamic in nature - would I need to create "definitions" for each database that LINQ could access? I'm also a bit unsure about creating the LINQ queries dynamically too, although looking at a few examples that part at least seems doable.
XPO, on the other hand, seems to allow me to create a XPO Data Source on the fly. However, I can't find too much information on how to JOIN to other tables.
Can anyone offer any advice on which method - if any - is the best to try and retro-fit into this project? Or is the dynamic SQL model currently used fundamentally different from LINQ and XPO and best left alone?
Before you go and change the whole way that your app talks to the database, have you had a look at the following:
Run your code through a performance profiler (such as Redgate's performance profiler), the results are often surprising.
If you are constructing the SQL string on the fly, are you using .NET best practices such as String.Concat("str1", "str2") instead of "str1" + "str2"? (There is a small sketch of the string-building and parameter idea after these suggestions.) Remember, multiple small gains add up to big gains.
Have you thought about having a summary table or database that is periodically updated (say every 15 minutes; you might need to run a service to update this data automatically) so that you are only hitting one database? New connections to databases are quite expensive.
Have you looked at the query plans for the SQL that you are running? Today, I moved a dynamically created SQL string to a sproc (only 1 param changed) and shaved 5-10 seconds off the running time (it was being called 100-10,000 times depending on some conditions).
Just a warning if you do use LINQ: I have seen developers who decided to use LINQ write more inefficient code because they did not know what they were doing (pulling 36,000 records when they needed to check for 1, for example). These things are very easily overlooked.
Just something to get you started; hopefully there is something there that you haven't thought of.
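To make the string-building and query-plan points concrete, here is a minimal sketch (not from the original answer; the table, column and parameter names are invented) that assembles the optional WHERE clauses with a StringBuilder and passes user input as SqlParameters, so the SQL text stays stable and the server can reuse a cached plan:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Text;

// Sketch only: optional filters are appended to the text, but the values travel as
// parameters, so the permission/criteria combinations produce a small, stable set of
// SQL strings that SQL Server can plan once and reuse.
public static SqlCommand BuildCompanyQuery(SqlConnection conn, string companyName, int? minId)
{
    var sql = new StringBuilder("SELECT Id, Name FROM dbo.Companies WHERE 1 = 1");
    var cmd = new SqlCommand { Connection = conn, CommandType = CommandType.Text };

    if (!string.IsNullOrEmpty(companyName))
    {
        sql.Append(" AND Name = @name");
        cmd.Parameters.AddWithValue("@name", companyName);
    }
    if (minId.HasValue)
    {
        sql.Append(" AND Id >= @minId");
        cmd.Parameters.AddWithValue("@minId", minId.Value);
    }

    cmd.CommandText = sql.ToString();
    return cmd;
}
```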
Cheers,
Stu
As far as I understand, you are talking about the so-called server mode, where all data manipulation is done on the DB server instead of the records being transferred to the web server and processed there. In this mode the grid works very fast with data sources that contain hundreds of thousands of records. If you want to use this mode, you should create either the corresponding LINQ classes or XPO classes. If you decide to use the LINQ-based server mode, the LinqServerModeDataSource provides the Selecting event, which can be used to set a custom IQueryable and KeyExpression. I would suggest that you use LINQ in your application. I hope this information is helpful to you.
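For what it's worth, a hedged sketch of what that Selecting handler might look like, assuming the DevExpress event args expose KeyExpression and QueryableSource as described above; MyDataContext, Orders and CurrentCompanyId are placeholders:

```csharp
using System.Linq;
using DevExpress.Data.Linq;   // assumed namespace of the server-mode data source types

// Sketch only: the grid is handed an IQueryable plus the key column, and it then
// translates paging, sorting and filtering into SQL that runs on the database server.
protected void LinqServerModeDataSource1_Selecting(object sender,
    LinqServerModeDataSourceSelectEventArgs e)
{
    var db = new MyDataContext();                   // placeholder LINQ to SQL context
    e.KeyExpression = "Id";                         // primary key of the root table
    e.QueryableSource = db.Orders.Where(o => o.CompanyId == CurrentCompanyId);
}
```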
I guess there are two points where performance might be tweaked in this case. I'll assume that you're accessing the database directly rather than through some kind of secondary layer.
First, you don't say how you're displaying the data itself. If you're loading thousands of records into a grid, that will take time no matter how fast everything else is. Obviously the trick here is to show a subset of the data and allow the user to page, etc. If you're not doing this then that might be a good place to start.
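For example, a minimal sketch of the paging idea (Company is a placeholder entity with an Id column), so that only one page of rows ever leaves the database:

```csharp
using System.Linq;

// Sketch only: a stable ORDER BY plus Skip/Take is translated by the LINQ provider
// into a server-side paging query, so only pageSize rows come back regardless of
// how many rows the underlying table holds.
public static IQueryable<Company> GetPage(IQueryable<Company> companies, int pageIndex, int pageSize)
{
    return companies.OrderBy(c => c.Id)
                    .Skip(pageIndex * pageSize)
                    .Take(pageSize);
}
```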
Second, you say that the tables are properly indexed. If this is the case, and assuming that you're not loading 1,000 records into the page at once but are retrieving only subsets at a time, then you should be OK.
But if you're only doing an ExecuteQuery() against a SQL connection to get a dataset back, I don't see how LINQ or anything else will help you. I'd say that the problem is clearly on the DB side.
So to solve the problem with the database, you need to profile the different SELECT statements you're running against it, examine the query plans and identify the places where things are slowing down. You might want to start with SQL Server Profiler, but if you have a good DBA, just looking at the query plan (which you can get from Management Studio) is often enough.

Sql server vs MS Access performance

I have an existing application which uses SQL Server 2005 as a back-end. It contains a huge number of records; I need to join tables which contain 50K-70K rows. The client machine has low-end hardware.
So, can I improve its performance by using MS Access as the back-end? I also need to run search operations against the Access file. So, which one is better for performance?
Is querying Access faster than querying SQL Server on low-end hardware?
Because SQL Server runs as a separate process, caches results, and uses RAM and processing power even when not being queried, if the other computer has very little RAM or a very slow processor (or, perhaps even more importantly, a single-core processor), I could see a situation where SQL Server is actually slower than using MS Access.
Without information about your hardware setup, approximately what percentage of your application relies on querying the database, etc., I'm not sure this question can be easily answered.
MS SQL Server 2005 Express requires at least 512 MB RAM (see http://www.microsoft.com/sqlserver/2005/en/us/system-requirements.aspx), so if your lower-end hardware doesn't have at least 512MB, I would certainly choose MS Access over SQL Server.
I should also add that you may want to consider SQLite (see http://www.sqlite.org/), which should have significantly less overhead than MS SQL Server. I'm not certain how it would stack up against MS Access over something like Jet. My gut instinct is that it would perform better with less overhead.
70,000 records is really not that big for SQL Server (or Access, for that matter). I would echo what has already been said and say that, all things being equal, SQL Server will outperform Access.
I would go back to your query and look at the execution plan to see why it is so slow, maybe missing indexes, out of date statistics or a whole host of other reasons could explain your current performance problems.
SQL Server also gives you the option of using materialised views to help with performance. The trade-off is slower insert/update/delete performance, but if you read more than you write it might be worth it.
I think Albert Kallal's comment is right, and the fact is that if you have a single-user app running on a single workstation (Access client with SQL Server running on the same workstation), it will quite often be slower than if the setup on that workstation were an Access client with a Jet/ACE back end on the same machine. SQL Server adds a lot of overhead that delivers no benefit when there is no network in between the client and the SQL Server.
The performance equation flips when there's a network involved, even for a single-user app. If the Access client runs on a workstation, and the SQL Server on a server on the other end of a network connection (even a fast one), it will likely be faster than if the data is stored in a Jet/ACE file on a file server.
But it's not a given, in my opinion. It depends entirely on the engineering of the application and the excellence of the schema.
I tried SQL Server Express 2005 vs MS Access 2010. Many people said SQL Server would run faster than MS Access (I thought so at first too). But what happened surprised me: running the query with MS Access was significantly faster than with SQL Server (with the same data and structure, because I had converted from Access to SQL Server beforehand).
But I don't know how it performs for other operations like insert, update and delete yet.
Local SQL Server Express 2014: about 2,200 records per minute (2,200× connect to DB and retrieve 1 record).
External SQL Server Express 2014 (different IP): about 2,200 records per minute (2,200× connect to DB and retrieve 1 record).
External SQL Server 2000 (old server): about 10,000 records per minute (10,000× connect to DB and retrieve 1 record).
Local Access database: about 55,000 records per minute (55,000× connect to DB and retrieve 1 record).
We were also surprised.
I'll answer the question directly, but first it is important to know a few things about Access and SQL.
In general, I have found that a small database with up to 10K records will perform equally well on either Access or SQL Server if all machines have reasonable hardware. Access has the benefit of simplicity for a small number of users (up to about four), but it also has a size limit of 2GB, so you need to be careful that the database stays below this limit. Some databases start small but have a way of growing over time; that is something to keep in mind when planning for the future of your program and/or database. If you might approach the 2GB limit, one option is Microsoft SQL Server 2014 Express edition, which has a database size limit of 10GB. SQL Express is full SQL Server, but with size limitations. Full-blown SQL Server 2014 has a maximum database size of 524PB (524,000,000GB), so it would be fair to say it has no practical limit.
If your database has more than 10K records and especially for larger databases of 100K records or more, SQL can demonstrate significant performance gains.
Some performance gains can be achieved with MS Access by using pass-through queries, as with any program that uses SQL-optimized queries.
Why? The answer comes from how the technology works under the hood. With Access, if it is not using pass-through queries, it will read an entire table, find which records it needs and then show the result. With a program using SQL-optimized queries, the SQL engine returns just the results in a very efficient manner.
At the end of the day, if you have a small (<10K record) database used by up to four people, MS Access might make sense. If you expect the database to grow to more than 10K records or be used by more than 5 users, SQL Server is the logical choice.
Specifically for the question posed, about a 50-70K record database: I think that with reasonable hardware SQL Server will generally perform better; in a unique situation (such as low-end hardware on the SQL Server machine) a move to Access could yield some improvement.
My take on this topic is that one should think of payload in terms of pickup truck versus an 18 wheeler. Better/worse/faster/slower somewhat misses the point. It is a matter of choosing the appropriate vehicle for the payload.
70K records is easily handled by today's PCs, so one may as well stick with the pickup truck. Unless an organization already has an installed skill set for SQL Server, there would be no reason to use it for an on-premises Windows application of just 70K records. Obviously, if it is a web/mobile app that requires a back-end DB technology, then Access isn't a candidate.
SQL Server will always give you better performance because the query is executed on the server. Access on the back-end won't help because your client application will need to pull all the data from the tables, and then perform the join locally.
SQL Server has better indexing options... Filtered indexes, included columns, etc
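To make the "executed on the server" point concrete, a hedged sketch (table and column names are invented) of sending the join to SQL Server so that only the joined result rows travel back to the client, instead of the client pulling whole tables and joining them locally as it would with a Jet/Access back end:

```csharp
using System;
using System.Data.SqlClient;

// Sketch only: the JOIN and WHERE run inside SQL Server; the reader only ever
// streams the matching rows back to the client application.
public static void PrintCustomerOrders(string connectionString, int customerId)
{
    const string sql =
        @"SELECT c.Name, o.OrderDate, o.Total
          FROM dbo.Customers c
          JOIN dbo.Orders o ON o.CustomerId = c.Id
          WHERE c.Id = @customerId";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@customerId", customerId);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                Console.WriteLine($"{reader.GetString(0)}: {reader.GetDateTime(1)} {reader.GetDecimal(2)}");
        }
    }
}
```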
There is ZERO chance that an Access query is faster than a query against a properly indexed SQL Server database.

Resources