Access query calculation vs. Excel calculation where the Access query is the data source [closed] - performance

The setup is an Access back-end on a network drive, with a query as the data source for an Excel table. I want to know whether it is better to perform complex calculations in Excel after the data has been imported, or to have the calculation in the query itself.
For example:
The db collects quality control information, with individual records for every component of a lot. One calculation checks that all components of each lot have been recorded and, if so, checks that the most recent component was entered before the scheduled completion time.
Obviously this is a fairly intensive calculation in Excel, which leads to significant calculation time after the data has been imported. (It's very possible that the calculation isn't as efficient as it could be!!)
So what I'd like to know is whether the Access query would be more or less efficient at doing this calculation (bearing in mind that the file is on a network drive).
I hope all that makes sense but if not let me know and I will try to clarify!
Thanks.
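To make the check concrete, here is a minimal sketch of the logic in Python, with placeholder field names (LotID, EntryTime, ExpectedComponents, ScheduledCompletion); the actual Excel calculation differs, this is just to show what is being computed:

    # Minimal sketch of the lot check described above (placeholder field names).
    from collections import defaultdict
    from datetime import datetime

    def lot_status(rows, expected_counts, scheduled_completion):
        """rows: iterable of (lot_id, entry_time) pairs, one per recorded component.
        expected_counts: {lot_id: number of components the lot should have}.
        scheduled_completion: {lot_id: scheduled completion datetime}."""
        seen = defaultdict(list)
        for lot_id, entry_time in rows:
            seen[lot_id].append(entry_time)

        status = {}
        for lot_id, expected in expected_counts.items():
            times = seen.get(lot_id, [])
            if len(times) < expected:
                status[lot_id] = "incomplete"   # not all components recorded yet
            elif max(times) <= scheduled_completion[lot_id]:
                status[lot_id] = "on time"      # last component entered before the deadline
            else:
                status[lot_id] = "late"
        return status

    # Example:
    # lot_status([("A", datetime(2016, 3, 1, 9)), ("A", datetime(2016, 3, 1, 11))],
    #            {"A": 2}, {"A": datetime(2016, 3, 1, 12)})   -> {"A": "on time"}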

There is not a general rule for which platform will be faster. It depends on the size of the task and the method chosen to implement it.
MSAccess is absolutely great at collating information, but that is because it is flexible and systematic, which helps prevent errors. Not because collating information is fast. There is no general rule that says collating information will be faster in MSAccess, Excel, SQL Server or C#.
If you are using a code loop to compare all cells, that can take a long time however you do it. Post the code here to see if there is a suggestion on how to convert it to calculated cell expressions; to make Excel fast, you need to use calculated cell expressions.
If you aren't using a code loop, are you sure you aren't actually waiting for the database access?
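For comparison, the check described in the question can usually be expressed as one grouped aggregate and pushed into the Access query itself, so Excel only receives one row per lot. A rough sketch, assuming pyodbc, the Access ODBC driver, and placeholder table/column names (Lots, ComponentRecords, ExpectedComponents, ScheduledCompletion):

    # Rough sketch: run the lot check inside the Access query so Excel only
    # receives one row per lot. Table and column names are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
        r"DBQ=\\server\share\quality.accdb;"
    )

    sql = """
    SELECT l.LotID,
           IIF(COUNT(c.LotID) >= l.ExpectedComponents
               AND MAX(c.EntryTime) <= l.ScheduledCompletion,
               'on time', 'check') AS LotStatus
    FROM Lots AS l LEFT JOIN ComponentRecords AS c ON c.LotID = l.LotID
    GROUP BY l.LotID, l.ExpectedComponents, l.ScheduledCompletion
    """

    for lot_id, lot_status in conn.execute(sql):
        print(lot_id, lot_status)

Whether that beats well-written calculated cell expressions still depends on the network drive, but either way the check becomes one set-based pass instead of a per-row lookup.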

Related

Website Performance Issue [closed]

If a website is experiencing performance issues all of a sudden, what can be the reasons behind it?
In my view the database could be one reason, or space on the server could be another of a few reasons; I would like to know more about it.
There can be any number of reasons, and which ones apply depends on your specific setup.
Based on what you have described, you can look at:
System counters of the web server/app server: CPU, memory, paging, I/O, disk (see the sketch after this list).
Any changes you made to the application: analyse whether those changes are costly in terms of performance and whether any improvement is required.
If the system counters are saturated, check which one is the bottleneck and try to resolve it.
Check all layers/tiers of the application, i.e. app server, database, directory, etc.
If the database is the bottleneck, identify the costly queries and apply indexes and other DB tuning.
If the app server is choking, identify and improve the methods that are resource-heavy.
Performance tuning is not a fast-track process; it takes time. Identify bottlenecks, try to solve them, and repeat the process until you get the desired performance.
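If no monitoring is in place yet, here is a quick sketch of sampling those counters on the box, assuming Python and the psutil package are available (a proper monitoring tool is better in the long run):

    # Quick-and-dirty sampling of the basic system counters mentioned above.
    # Assumes the psutil package; a real monitoring tool is better long term.
    import time
    import psutil

    for _ in range(10):                          # roughly one minute of samples
        cpu = psutil.cpu_percent(interval=1)     # % CPU over a 1-second window
        mem = psutil.virtual_memory().percent    # % RAM in use
        swap = psutil.swap_memory().percent      # % swap in use (paging pressure)
        disk = psutil.disk_io_counters()         # cumulative disk I/O
        print(f"cpu={cpu}% mem={mem}% swap={swap}% "
              f"disk_read={disk.read_bytes} disk_write={disk.write_bytes}")
        time.sleep(5)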

Would Hadoop help my situation? [closed]

I am in the process of creating a survey engine that will store millions of responses to various large surveys.
There will be various agencies, each with 10-100 users, and each will be able to administer a 3000+ question survey.
If each agency were to have hundreds of thousands of sessions, each with 3000+ responses, I'm thinking that Hadoop would be a good candidate for pulling up the sessions and their response data to run various analyses on (aggregations etc.).
The sessions, survey questions, and responses are all currently held in a SQL database. I was thinking that I would keep that and store the data in parallel, so that when a new session is taken under an agency it is also added to the Hadoop 'file', and when the entire dataset is called up it would be included.
Would this implementation work well with hadoop or am I still well within the limits of a relational database?
I don't think anyone is going to be able to tell you definitively yes or no here, and I also don't think I fully grasp what your program will be doing from the wording of the question. However, in general, Hadoop Map/Reduce excels at batch-processing huge volumes of data; it is not meant to be an interactive (i.e. real-time) tool. So if your system:
1) Will be running scheduled jobs to analyze survey results, generate trends, summarize data, etc., then yes, M/R would be a good fit for this.
2) Will allow users to search through surveys by specifying what they are interested in and get reports in real time based on their input, then no, M/R would probably not be the best tool for this. You might want to take a look at HBase (I haven't used it yet). Hive is a query-based tool, but I am not sure how 'real-time' it can get. Also, Drill is an up-and-coming project that looks promising for interactively querying big data.
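To make the batch case (1) concrete, here is a minimal sketch of a job written for Hadoop Streaming, which lets the mapper and reducer be plain scripts; the input format of "agency <tab> question_id <tab> response" lines is an assumption:

    # Minimal Hadoop Streaming sketch: count responses per (agency, question).
    # Assumes input lines of "agency<TAB>question_id<TAB>response"; run with
    # something like:
    #   hadoop jar hadoop-streaming.jar -mapper "job.py map" -reducer "job.py reduce" ...
    import sys

    def mapper():
        # Emit one key-value pair per response line.
        for line in sys.stdin:
            agency, question_id, _response = line.rstrip("\n").split("\t", 2)
            print(f"{agency}:{question_id}\t1")

    def reducer():
        # Hadoop sorts by key before the reducer, so equal keys arrive together.
        current_key, count = None, 0
        for line in sys.stdin:
            key, value = line.rstrip("\n").split("\t")
            if key != current_key:
                if current_key is not None:
                    print(f"{current_key}\t{count}")
                current_key, count = key, 0
            count += int(value)
        if current_key is not None:
            print(f"{current_key}\t{count}")

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()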

Suggestions for an Oracle data modeler that can reverse engineer and handle very large databases [closed]

By very large, I mean in the realm of thousands of tables. I've been able to use Toad Data Modeler to do the reverse engineering, but once it loads the txp file it creates, it just croaks. Even attempting to split up the model isn't possible, as TDM just sits there, frozen.
So I was wondering what other options are out there, and perhaps whether there are any 32-bit applications that can handle such a database model (considering the memory used by this is ~750MB, I would think it not too large for a 32-bit machine with maximum RAM).
Also to note, I am not trying to create a diagram with this (such a huge diagram would be effectively useless unless you already knew the system); instead I need to export the design of the database. So the data modeling tool doesn't need to support any sort of fancy graphics, which may not be possible at this size anyway.
Edit:
I've found a potential workaround that gets TDM working: close the project, close TDM, open TDM again, and then reopen the project. (Just killing the process while it is frozen will not work.) Doing this resets the graphical view to the normal zoom level, whereas straight after reverse engineering the entire database is fitted into the view (and if you kill the process, you will still see the entire database when you reopen the file). I am not certain of the details, but when zoomed like this TDM runs much more smoothly and does not freeze or crash, so I am able to keep working in it to do what I need.
How about Oracle's own SQL Developer Data Modeler?
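If the real goal is only to export the design rather than to draw it, another option is to pull the definitions straight out of the data dictionary. A rough sketch, assuming the cx_Oracle package and SELECT access to ALL_TAB_COLUMNS (connection details are placeholders):

    # Rough sketch: dump table/column definitions straight from the data
    # dictionary instead of fighting a modeling tool. Assumes cx_Oracle and
    # SELECT access to ALL_TAB_COLUMNS; credentials/DSN are placeholders.
    import csv
    import cx_Oracle

    conn = cx_Oracle.connect("scott", "tiger", "dbhost/orcl")
    cur = conn.cursor()
    cur.execute("""
        SELECT owner, table_name, column_name, data_type, data_length, nullable
        FROM all_tab_columns
        ORDER BY owner, table_name, column_id
    """)

    with open("schema_export.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["owner", "table", "column", "type", "length", "nullable"])
        writer.writerows(cur)   # the cursor yields one tuple per column definition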

refactor old webapp to gain speed [closed]

Four years ago I built a webapp which is still used by some friends. The problem with the app is that it now has a huge database and loads very slowly. I know that's my own fault: MySQL queries are mixed in all over the place (even during layout generation).
At the moment I know a bit about OO. I'd like to use this knowledge in my old app, but I don't know how to do it without rewriting everything from the beginning. Using MVC for my app is very difficult at this moment.
If you were in my place, or if you had the task of improving the speed of my old app, how would you do it? Do you have any tips for me? Any working scenarios?
It all depends on context. The ideal would be to change the entire application, introducing best practices and standards all at once, but it would probably be better to adopt an evolutionary approach:
1 - Identify the major bottlenecks in the application using a profiling tool or load test.
2 - Estimate the effort required to refactor each item.
3 - Identify the pages whose performance is most sensitive for the end user.
4 - Based on that information, create a task list and set the priority of each item.
Attack one problem at a time, making small increments, and always try to spend 80% of your time solving the 20% most critical problems.
Hard to give specific advice without a specific question, but here are some general optimization/organization techniques:
Profile to find hot spots in your code
You mention MySQL queries being slow to load; try to optimize them (see the sketch after this list)
Possibly move database access to stored procedures to help modularize your code
Look for repeated code and try to move it into objects one piece at a time
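One cheap way to start the profiling step is to wrap the existing query calls in a timer and log the slow ones. A minimal sketch of the idea, shown in Python only for brevity; the same wrapper works in whatever language the app is written in, and MySQL's slow query log does much the same thing at the server level:

    # Minimal sketch of the "find the slow queries first" step: wrap query
    # execution in a timer and log anything over a threshold. Illustrative
    # only; the wrapper is the same idea in any language.
    import logging
    import time

    SLOW_MS = 100  # arbitrary threshold

    def timed_query(cursor, sql, params=()):
        start = time.perf_counter()
        cursor.execute(sql, params)
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > SLOW_MS:
            logging.warning("slow query (%.0f ms): %s", elapsed_ms, sql)
        return cursor.fetchall()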

What applications are there that I can pass data to as it's generated and have it analyze some statistics for? [closed]

The basic requirement is to pass a command type and an execution time (possibly other data as well, but that's the basic data we're concerned with at the moment) from C# code (either managed code or something that can take data periodically from the command line) and perform some statistical analysis on it: average time for each command type, standard deviation, some charts would be nice, etc.
Something that can do this in real time might be preferable, but I guess it's also acceptable to save the data ourselves and just pass it in to be analyzed.
We could write up something for this, but it seems like there should probably be something out there for this.
Edit: Basically I'm looking for something with a low learning curve that can do what's mentioned above, i.e. something that would be faster to learn and use than coding it manually.
I may be off base here, but would custom Windows perfmon objects and counters do this? You could create an object for each command type, with a counter for execution time, then use Perfmon's logging, charting and reporting facilities. Or export the Perfmon data to Excel/Access for fancier stuff.
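If Perfmon turns out to be overkill and you end up saving the data yourselves as mentioned in the question, the aggregation itself is small. A minimal sketch, assuming lines of "command <tab> elapsed-ms" have been written to a log file (the file name is a placeholder):

    # Minimal sketch of the "do it ourselves" fallback: per-command average
    # and standard deviation from a log of "command<TAB>elapsed_ms" lines.
    from collections import defaultdict
    from statistics import mean, stdev

    samples = defaultdict(list)
    with open("command_times.log") as log:          # file name is a placeholder
        for line in log:
            command, elapsed_ms = line.rstrip("\n").split("\t")
            samples[command].append(float(elapsed_ms))

    for command, times in sorted(samples.items()):
        spread = stdev(times) if len(times) > 1 else 0.0
        print(f"{command}: n={len(times)} avg={mean(times):.1f}ms stddev={spread:.1f}ms")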
