I have two different users. For one user it takes more than 2 minutes to get results from the view, and that user has only 47 records; for the second it takes 0.48 seconds, and that user has records in the thousands. How is this possible? Both are running against the same view in the same environment. Please guide me on what steps to follow to get this analyzed.
In my current task, I'm trying to insert 100 users with approximately 20 properties each. The logger shows EF Core executing the insert by splitting it into 4 different queries, and each query execution takes up to 100 ms. Even though all queries execute in under 1 second, it takes the application code around 10 seconds to step over SaveChanges.
Things that have been considered and implemented:
There is only a single SaveChanges call.
There are no additional relations with the user object. Single entity, single table.
All mappings were validated a couple of times to confirm that entity property types match the column types.
For a record count as low as 100, this is unacceptable, as I think you'll agree.
Which direction should I look at to understand the underlying problem?
Thanks in advance.
I am working with the Hive benchmark https://github.com/hortonworks/hive-testbench
I have a problem loading data into the tables. The TPC-DS data generator generates the data and then tries to load it into the tables, but at table 17 out of 24 it stops working and does nothing! I have tried several times and given it plenty of time to complete, but it seems to be stuck at this step and nothing happens. Please guide me on what I should do.
I can't run my queries because some tables are missing.
I am using Azure with 8 cores and 28 GB of RAM.
I tried this step several times, and after checking the tables I found that they do get populated with the sample data, but because of a problem in the shell the process appears to stop at number 17. It just takes about 30 minutes after the 17th job and all the tables are populated, but the shell still stays frozen.
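For reference, a rough way to check the tables from a second session while the setup shell looks frozen is sketched below. This is only a sketch, assuming PyHive is installed; the host, port, and database name are placeholders, so use whatever database your tpcds-setup run actually created.

    # Sketch: check from a second session whether the TPC-DS tables are populated.
    # Assumes PyHive; host, port, and database name are placeholders.
    from pyhive import hive

    conn = hive.Connection(host="localhost", port=10000, database="tpcds_text_2")
    cur = conn.cursor()

    cur.execute("SHOW TABLES")
    tables = [row[0] for row in cur.fetchall()]
    print(f"{len(tables)} tables found")

    for table in tables:
        # Each COUNT(*) launches a Hive job, so this takes a while;
        # a non-zero count means the load for that table worked.
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        print(table, cur.fetchone()[0])

    conn.close()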
I built a map containing this logic:
SOURCES -> SORTER -> AGG(FIRST BY GROUP) -> 2 LOOKUPS -> FILTER -> TARGET
Now, when I manually run the query generated by the sources, adding the 2 lookups with a LEFT JOIN and sorting, the query takes about 30 seconds.
I ran the same map in my DEV environment to try to debug it, and there it ran in 2 minutes (connected to the same connection as in PRODUCTION, and the map is truncate/insert).
I looked up the history of this session, and its run time ranges from 6 minutes to over an hour, with the same amount of data every day!
I've tried adding statistics/increasing the commit interval but nothing seems to help.
Any suggestions?
Thanks in advance.
First of all, the fact that the source query (with the lookups) returns data within 30 seconds doesn't mean you get all the data within 30 seconds. SQL client tools show only the first 50 to 500 records; extracting the complete data set may need much more time.
Beyond that, I don't see many possible reasons for slowness. Here are my thoughts -
Did you find any pattern to the slowness, like during month end or month start? The main suspects I can see are the source and lookup (if it is a table) data. When a table's size varies rapidly, or the table is not analyzed, or the table undergoes a lot of delete/load operations, its cost estimates vary and the SQL becomes slower. Make sure stats are gathered periodically for the lookup and source tables (a sketch follows these points).
Maybe some other operation that runs in parallel with your map is eating up all your resources, so it takes 1 hour to complete the map.
How much data does it process - thousands, millions, or billions of rows? Depending on that, you can rearrange the map like this: source > source qualifier > lookup > filter > sorter + aggregator > target to improve performance.
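To gather those stats, something along these lines can be scheduled. This is only a sketch, assuming the source and lookup tables live in Oracle and the python-oracledb driver is available; the connection details, schema, and table names are placeholders.

    # Sketch: refresh optimizer statistics for the source and lookup tables.
    # Assumes Oracle and python-oracledb; credentials, schema, and table names are placeholders.
    import oracledb

    conn = oracledb.connect(user="etl_user", password="...", dsn="dbhost/orclpdb")
    cur = conn.cursor()

    for table in ("SRC_ORDERS", "LKP_CUSTOMERS", "LKP_PRODUCTS"):  # placeholder names
        # Fresh stats let the optimizer cost the lookup joins correctly.
        cur.callproc("dbms_stats.gather_table_stats", ["ETL_SCHEMA", table])

    conn.close()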
I created a view by joining multiple tables, and it also uses a subquery.
All the columns from the view are mapped to the application.
There are only 1,000 rows in the view, and it takes 2 minutes to run.
Issue:
In the application it takes 40 to 50 seconds to fetch data from the view.
Is this a database issue or a network-related issue? Please advise.
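One way to narrow this down is to time the query execution and the row transfer separately from a client running on the application host. The snippet below is only a sketch: it assumes a pyodbc connection, and the connection string and view name are placeholders.

    # Sketch: separate database time from network/fetch time.
    # Assumes pyodbc; the connection string and MY_VIEW are placeholders.
    import time
    import pyodbc

    conn = pyodbc.connect("DSN=app_db;UID=app_user;PWD=...")
    cur = conn.cursor()

    t0 = time.perf_counter()
    cur.execute("SELECT * FROM MY_VIEW")   # server starts executing the view here
    t1 = time.perf_counter()
    rows = cur.fetchall()                  # pulls every row across the network
    t2 = time.perf_counter()

    print(f"execute: {t1 - t0:.2f}s, fetch of {len(rows)} rows: {t2 - t1:.2f}s")
    conn.close()

Roughly speaking, if most of the time sits in the fetch step, look at the network and driver settings; if it sits in the execute step, look at the view's query plan.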
I have a small site running the Flynax classifieds software. I get 10 to 15 concurrent users at most. Sometimes I get a very high load average, which results in outages and downtime problems on my server.
I run
root@host [~]# mysqladmin proc stat
and I see this:
Uptime: 111346 Threads: 2 Questions: 22577216 Slow queries: 5159 Opens: 395 Flush tables: 1 Open tables: 285 Queries per second avg: 202.766
Is 202.766 queries per second normal for a small site like mine?!
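As far as I understand, that average is just the Questions counter divided by the Uptime in seconds since the server started. A quick check with the numbers above:

    # Sanity check: the "Queries per second avg" reported by mysqladmin stat
    # is the Questions counter divided by Uptime (seconds since server start).
    uptime_seconds = 111346
    questions = 22577216

    qps_avg = questions / uptime_seconds
    print(f"{qps_avg:.3f} queries per second")  # ~202.766, matching the reported value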
The hosting company says my app is poorly coded and must be optimized.
The Flynax developers say the server is weak and must be replaced.
I'm not sure what to do. Any help is much appreciated.
202.766 queries per second isn't normal for the small website you described.
Probably some queries are running in a loop, and that is why you have such statistics.
As far as I know, the latest Flynax versions have a MySQL debug option; using this option
you can see how many queries run on each page and how much time each query takes.
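If that option isn't available in your version, MySQL's own slow query log gives similar information. The snippet below is only a sketch: it assumes the mysql-connector-python driver and an account allowed to set global variables, and the host, user, and password are placeholders.

    # Sketch: enable MySQL's slow query log to find the expensive queries.
    # Assumes mysql-connector-python and a user that may set global variables;
    # host, user, and password are placeholders.
    import mysql.connector

    cnx = mysql.connector.connect(host="localhost", user="root", password="...")
    cur = cnx.cursor()

    cur.execute("SET GLOBAL slow_query_log = 'ON'")
    cur.execute("SET GLOBAL long_query_time = 1")                   # log anything slower than 1 second
    cur.execute("SET GLOBAL log_queries_not_using_indexes = 'ON'")  # catch unindexed scans too

    cur.execute("SHOW VARIABLES LIKE 'slow_query_log_file'")        # where the log is written
    print(cur.fetchone())

    cnx.close()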
Cheers