Grid Control gets slower over time - Oracle

We use Grid Control 10.2.0.4, with a catalog repository database also at 10.2.0.4. After a week or two of uptime, the response time of the web interface gets very poor (20+ seconds to navigate to a new page, when 2-3 seconds is normal). The only thing we've found that overcomes it is a restart of the catalog database and the GC/OMS. No errors are reported in the alert log, just unbearable slowness. Are there any Oracle DBAs out there using GC who have seen this (and hopefully found a solution)?

We migrated to the same version you are on about a month ago and have not experienced the problems you describe, at least not yet. Have you checked the OMS logs? Is there excessive CPU and/or disk usage?

It appears that the problem is a symptom of a larger issue with the OMS. The direct cause seems to be that the OMS_HOME/sysman/recv/clob directory contained nearly 100K files. Why that is happening is the subject of an open SR with Oracle. Under Oracle's direction, I deleted nearly everything in that directory, and the performance has improved significantly.
Update, 1/15/2009:
Oracle has opened a bug report for this issue. Anyone on a Windows platform running GC 10.2.0.4 should keep an eye on this directory and periodically clear it out if cleanup isn't happening automatically.
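A minimal sketch of the kind of periodic cleanup described above, assuming files older than a week in the clob directory are safe to delete (the OMS_HOME path, the age threshold, and the deletion policy itself are all assumptions; confirm against your SR before automating anything like this):

```python
import os
import time

# Path to the OMS receive directory that was filling up (assumed layout;
# adjust OMS_HOME for your installation).
CLOB_DIR = r"C:\oracle\oms10g\sysman\recv\clob"
MAX_AGE_DAYS = 7  # assumption: files older than a week are safe to remove

def purge_old_files(directory, max_age_days):
    """Delete files older than max_age_days; return how many were removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed += 1
    return removed

if __name__ == "__main__":
    count = purge_old_files(CLOB_DIR, MAX_AGE_DAYS)
    print(f"Removed {count} files from {CLOB_DIR}")
```

Scheduled via Windows Task Scheduler, something like this keeps the directory from growing back toward the 100K-file range until the underlying bug is fixed.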

Related

Sitecore Sites.Config

In our team we have a Sitecore solution that we run locally. Some developers were noticing a high startup time, while others weren't.
After investigating the differences between the local config files, I noticed that one particular setting was causing the high startup time.
The setting was located in the sites.config transform. When we added a transform that set enablePreview to false, the startup time increased by more than 100%.
When we removed it, the startup time was much better.
This seems pretty strange to me, because disabling preview should make your Sitecore solution faster, right?
Does anyone have a good explanation for this?
To speed up startup you could, among other things, disable SPEAK precompilation. For an extensive list of developer-productivity tweaks, read this blog by John West.
I know this is a couple of months old, but try using the Sitecore debugger by appending ?sc_debug=1&sc_prof=1&sc_trace=1&sc_ri=1 to the end of your URL.
That will allow you to see anything that takes an exceptional amount of time to load.
Another thing that could reduce startup time is to disable or decrease caching, and to disable indexing, in your dev environment.
While caching and indexing speed up the site in production, they also slow down startup. You can set these tweaks in Debug web.config transformations.
Similarly, you can reduce start-up time by disabling caching and indexing for the Master database, since there is no need to index the Master database in your dev environment.
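Not an explanation for the enablePreview oddity, but to quantify the effect of config changes like these you can time the cold start directly. A rough sketch (the site URL is a placeholder for your local instance):

```python
import time
import urllib.request

SITE_URL = "http://local.sitecore.dev/"  # hypothetical local site URL

# Time the first request after an app-pool recycle; that first hit
# includes Sitecore's initialization cost.
start = time.time()
with urllib.request.urlopen(SITE_URL) as response:
    response.read()
print(f"Cold start took {time.time() - start:.1f}s")
```

Run it once right after a recycle with the transform in place and once without, and you get a number to attach to the difference instead of a feeling.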

Drupal development: performance

As the single user/developer on a Drupal website I'm experiencing serious performance problems. Several issues occur:
Usually I develop Drupal on our company dev server, but right now I'm at a client's office. The IT guys installed a VM with WAMP on the server they usually use for .NET development. On the first day of development (installing Drupal and the required modules, then configuring them), httpd.exe would max out the CPU and loading any page took minutes. The IT guys just scratch their heads.
I then installed WAMP on the local machine they gave me: some 299.99 Win XP Dell piece of sh*t, but nevertheless a P4 2.8GHz with 2GB RAM. The fan blows so loudly the entire office is giving me dirty looks. Again httpd.exe maxes out, and again any page (especially the admin ones) takes minutes to load.
In Firefox, the Views UI is completely unworkable. A lot of it is loaded via Ajax, and it again takes minutes for the various HTML elements that are dynamically inserted into the UI to appear - try to imagine this.
Chrome seems to handle the JS a bit better, but it still takes way too long to complete any kind of action.
The devel_themer module adds tons of markup to the page, which leads to "Allowed memory size of X exhausted" errors (memory_limit = 128MB).
Now I'm at the theming stage, where I need to do a LOT of page refreshes. I NEED Firebug, which requires Firefox, which in turn eats up CPU and RAM. What usually takes seconds now takes minutes, and by the time whatever action completes, I've forgotten what I was doing. I'm basically reading news stories between every page reload.
Now, I know Drupal is resource intensive, but being unable to develop on a typical Dell / Win XP machine is a bit much, no? At home I work on an iMac and everything runs silky smooth.
I can't imagine I'm the only guy with this problem, since what I'm doing is basically Drupal 101 (no custom modules so far...). Unless someone can offer a solution, I'm concluding that you basically cannot develop a typical Drupal site on a normal home desktop computer.
What gives?
So you have abandoned the VM. Check your php.ini file for the memory limit and increase it, then see if there is a performance boost; it's usually set to a low default such as 16M.
HTH
I'd suggest you either spend some time actually tuning your XP system, because the default WAMP config is definitely suboptimal, or consider an alternative like Zend Server Community Edition (ZCE). Although not completely free as in speech, it is free as in beer, and it simply builds on top of a better default config for Apache and MySQL.
Although less convenient than WAMP or ZCE since it is not bundled, a manual install of Apache 2.2 is also usually a good choice.
Also note that, the way devel_themer works, it is constantly creating files in your temp directory, meaning that unless that directory is cleaned regularly, files will accumulate and directory listings will become exceedingly slow. For an up-to-date version of devel, only a cron.php run will cause Drupal to clean those files; a quick way to check whether this is hurting you is sketched below. See my patch adding this cleanup at http://drupal.org/node/303443
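A small diagnostic along those lines, assuming a typical WAMP temp path (check admin/settings/file-system for your actual temporary directory; both the path and the one-day staleness cutoff are assumptions):

```python
import os
import time

# Typical WAMP temp path for a Drupal site (assumption; verify against
# your Drupal file system settings before trusting the numbers).
TEMP_DIR = r"C:\wamp\tmp"

now = time.time()
files = []
for name in os.listdir(TEMP_DIR):
    path = os.path.join(TEMP_DIR, name)
    if os.path.isfile(path):
        files.append((now - os.path.getmtime(path), os.path.getsize(path)))

total_mb = sum(size for _, size in files) / 1e6
stale = sum(1 for age, _ in files if age > 86400)  # older than a day
print(f"{len(files)} files, {total_mb:.1f} MB total, {stale} older than 24h")
```

If that count is in the tens of thousands, the directory itself is part of your slowdown, and either regular cron runs or the patch above is worth applying.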
Finally, you mention Firebug, and you might be using the Drupal for Firebug module, which has known performance issues, apparently related to infinite recursion in some cases, although recent versions are supposed to fix this problem. See for instance http://drupal.org/node/303443
A couple of things I've run into that could potentially help:
Unless you actually need it, turn off Locale. It causes a ton of extra queries (at least the last time I looked into it; this may have changed), so if you're not using it, don't put the unnecessary load on your DB.
Just as on a regular development machine, make sure MySQL is properly tuned and configured. This goes for any setup: local, development, or production. Three quarters of the time the database is the bottleneck, so start there.
If you've got the devel module installed and enabled, it has a query log you can tell it to output at the bottom of the page; this should help you with the previous point.
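One way to act on the MySQL advice is to enable the slow query log (log-slow-queries in my.ini on that era of WAMP) and summarize it. A rough parser sketch; the log path is an assumption:

```python
import re
from collections import Counter

SLOW_LOG = r"C:\wamp\logs\mysql-slow.log"  # assumed path; set via my.ini

# Collect query times and bucket statements by their leading keyword,
# so the worst offenders and their kind stand out at a glance.
times, kinds = [], Counter()
with open(SLOW_LOG) as log:
    for line in log:
        match = re.match(r"# Query_time: ([\d.]+)", line)
        if match:
            times.append(float(match.group(1)))
        elif re.match(r"(SELECT|INSERT|UPDATE|DELETE)", line, re.I):
            kinds[line.split()[0].upper()] += 1

if times:
    print(f"{len(times)} slow queries, worst {max(times):.1f}s")
    print(kinds.most_common())
```

Pairing that with devel's per-page query log usually points straight at the tables or modules doing the damage.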

Terrible DotNetNuke performance

I'm involved with a project using DotNetNuke version 05.01.04 Community Edition. We are building our new intranet with it, but performance is terrible.
We have five people adding pages and content, and every 15-30 seconds they experience a pause of 10 seconds or longer before the system continues and the next screen loads.
The server is Windows 2003, 3.8GHz with 1GB of RAM. I'm told by our server admin that CPU and memory don't appear to be the bottleneck.
We currently have 350 pages in the system and plan to add 1,000 more, so we need to resolve this performance problem before we can enter the content and go live.
I just can't see where the bottleneck is. Is there a good way to determine the bottleneck when using DotNetNuke?
Modules installed:
Publish:Engage (not currently in use)
Page Blaster (doesn't appear to provide caching when users are logged in using Integrated Authentication)
SimpleGallery
XMod
Content Manager
IIS Setup:
Application recycling completely disabled (apart from a 2am recycle)
New findings: 18th March 2010
The main bottleneck was due to version 5.1.4 having a bug which caused 1300 database roundtrips on an average page, due to broken database in-memory caching. We've upgraded to 5.2.4 which has resolved this bottleneck.
Now the next biggest bottleneck is the navigation. We've used both DDR:Menu and DDN:Nav, but both have a major impact on performance.
Is there a navigation interface out there that doesn't drain performance so badly?
I think you need to start investigating this using performance profiling tools. For the DNN application itself I'd grab something like JetBrains DotTrace or Red Gate's ANTS Performance Profiler.
For the database SQL Server Profiler would be the first choice or a tool such as Red Gate's SQL Response.
Without profiling the application you're just grasping at straws.
And as Tim pointed out in his comment, install Firebug in Firefox with the YSlow add-on to see which resources are taking longest to serve to the browser.
Mitchel Sellers has some good tutorials and checklists to go through with regards to performance in DNN. Start with Explaining High Performance DotNetNuke Configuration and Management (which points to some of his earlier articles).
I have several years of DNN development and maintenance experience. When I have this kind of problem, I start with database cleanup. The next step is to look for missing indexes and/or rebuild all the indexes periodically (a scheduled SQL job works well for that), but the major performance gain usually comes from the table cleanup.
Other good considerations: disable tracing, set debug mode to false, and turn off the DNN features you don't use (the scheduler is the first one to turn off).
Edit: consider keep-alive as well.
Hope this helps
Is your database on that server? If so, just throw in some more RAM, or get a faster disk array...
Have you considered creating that batch of pages directly through T-SQL? It's not hard to do and may save you a lot of time.

Performance issue with site speed: 'w3p.exe' process reaches 99/100%

My site goes slow and stops serving certain services externally. If we check the process monitor we see that it is normally due to the 'w3p.exe' process, which is the background process for running the website; it regularly reaches 99/100%. Killing the process or restarting the web publishing service resolves this. My web host says this can only be due to bad coding. Can someone comment on this?
When performance testing a reasonably straightforward website (coded in ASP.NET) I saw it slow to a crawl, with memory use going through the roof over time. Each time, recycling the w3wp process restored performance back to normal.
I never got around to figuring out why (the load we were testing with was way above normal, and it could have been worked around anyway by recycling the w3wp process more frequently), but my bet would have been that ViewState was causing the slowdown. A lot of pages had very large ViewState that wasn't being used in any way, and I can plausibly see how loading large ViewState values might cause memory-related performance degradation over time.
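For what it's worth, while chasing this kind of issue a small monitor makes the pattern visible over time. A sketch using the third-party psutil package (the one-minute interval is arbitrary, and this only observes; it doesn't recycle anything):

```python
import time
import psutil  # third-party: pip install psutil

# Log CPU and memory for every w3wp.exe worker once a minute, so a slow
# climb toward 100% CPU or runaway memory shows up as a trend over time.
while True:
    for proc in psutil.process_iter(["name", "memory_info"]):
        try:
            if (proc.info["name"] or "").lower() == "w3wp.exe":
                cpu = proc.cpu_percent(interval=1.0)
                mem_mb = proc.info["memory_info"].rss / 1e6
                print(f"{time.strftime('%H:%M:%S')} pid={proc.pid} "
                      f"cpu={cpu:.0f}% mem={mem_mb:.0f}MB")
        except psutil.NoSuchProcess:
            pass  # the worker was recycled between listing and sampling
    time.sleep(60)
```

If memory ratchets upward between recycles while traffic stays flat, that points at something like the ViewState theory above rather than raw load.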
What language is the site coded in? I recently ran into the same problem on a server running IIS6/PHP and found the following bug -
http://bugs.php.net/bug.php?id=37575
Upgrading PHP to 5.3 solved the issue.

How big can a Sourcesafe DB be before "problems" arise?

We use SourceSafe 6.0d and have a DB that is about 1.6GB. We haven't had any problems yet, and there is no plan to change source control programs right now, but how big can the SourceSafe database get before it becomes an issue?
Thanks
I've had VSS problems start as low as 1.5-2.0 gigs.
The meta-answer is, don't use it. VSS is far inferior to a half-dozen alternatives that you have at your fingertips. Part of source control is supposed to be ensuring the integrity of your repository. If one of the fundamental assumptions of your source control tool is that you never know when it will start degrading data integrity, then you have a tool that invalidates its own purpose.
I have not seen a professional software house using VSS in almost a decade.
1 byte!
:-)
Sorry, dude, you set me up.
Do you run the built-in ssarchive utility to make backups? If so, 2GB is the maximum size that can be restored. (http://social.msdn.microsoft.com/Forums/en-US/vssourcecontrol/thread/6e01e116-06fe-4621-abd9-ceb8e349f884/)
NOTE: the ssarchive program won't tell you this; it's just that if you try to restore a DB over 2GB, it will fail. Beware! All the guys telling you that they are running fine with a larger DB are either using another archive program, or they haven't tested the restore feature.
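Since ssarchive won't warn you, it's easy to check yourself. A sketch that totals the database folder and flags the 2GB restore ceiling (the share path is an assumption; point it at your VSS database's data folder):

```python
import os

VSS_DIR = r"\\server\vss\data"  # assumed path to your VSS data folder
LIMIT = 2 * 1024**3  # the 2GB ssarchive restore ceiling noted above

# Walk the database folder and sum the size of every file in it.
total = 0
for root, _dirs, files in os.walk(VSS_DIR):
    for name in files:
        total += os.path.getsize(os.path.join(root, name))

print(f"Database size: {total / 1024**3:.2f} GB")
if total > LIMIT:
    print("WARNING: over 2GB - verify your archive can actually be restored!")
```

Running that alongside your backup job at least tells you when you've crossed the line, before a failed restore does.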
I've actually run a VSS DB that was around 40 gigs. I don't recommend it, but it is possible. Really, the larger you let it get, the more you're playing with fire. I've heard of instances where the DB got corrupted and the items in source control were unrecoverable. I would definitely back it up on a daily basis and start looking to change source control systems. Having been in the position of the guy they call when it fails, I can tell you that it really starts to get stressful when you realize it could just go down and never come back.
Considering the amount of problems SourceSafe can generate on its own, I would say the size has to be in the category "Present on disk" for it to develop problems.
I've administered a VSS DB over twice that size. As long as you are vigilant about running Analyze, you should be OK.
SourceSafe recommends 3-5GB, with a "don't ever go over 13GB".
In practice, however, ours is over 20GB and seems to be running fine.
The larger you get, the more problems Analyze will find, including lost files, etc.
EDIT: Here is the official word: http://msdn.microsoft.com/en-us/library/bb509342(VS.80).aspx
I have found that Analyze/Fix starts getting annoyingly slow at around 2G on a reasonably powerful server. We run Analyze once per month on databases that are used by 20 or so developers. The utility finds occasional fixes to perform, but actual use has been basically problem free for years at my workplace.
The main thing, according to Microsoft, is to make sure you never run out of disk space, whatever the size of the database.
http://msdn.microsoft.com/en-us/library/bb509342(VS.80).aspx
quote:
Do not allow Visual SourceSafe or the Analyze tool to run out of disk space while running. Running out of disk space in the middle of a complex operation can create serious database corruption
