Does anyone know of a better GUI client for displaying Windows System Monitor log files? (System Monitor is sometimes called Performance Monitor.) I'm trying to track a long-term memory leak in a C# application running on Windows XP or 2K3 by comparing memory usage against run logs.
Specifically, I want a client that will let me do the following (things System Monitor either can't do or makes difficult):
Specify exact date/time ranges for viewing data (or at least finer granularity than hours)
Show time intervals along the horizontal axis
Show max, min, average for the time range
Somewhere show the interval at which the source data was captured (1 sec, 5 min, etc.)
(If no such thing exists I'm willing to hear recommendations for better long term performance/memory capturing tools.)
Edit: I've done Google searches and haven't found anything except tutorials on how to create System Monitor logs.
See this question.
The PAL tool does a nice job of creating an HTML report with charts and graphs. By creating your own Threshold file you can control what goes into the report.
While I accepted Patrick Cuff's answer, for my needs I found a better way to graph the data: Excel
It still doesn't provide everything I need, but it is a marked improvement over the System Monitor GUI. I use the relog command line tool to convert the log into a CSV, and then import the CSV into Excel. Excel does not automatically handle the third item (max, min, average for the range), but I can add new columns to calculate and graph those values, and it gives me much better control over which data I'm displaying.
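For reference, the relog step is a one-liner; the file names here are just placeholders, and -b/-e optionally restrict the output to an exact date/time range before the data ever reaches Excel:

    rem Convert a binary System Monitor log to CSV for import into Excel
    relog MyPerfLog.blg -f CSV -o MyPerfLog.csv

    rem Optionally limit the conversion to a specific time window
    relog MyPerfLog.blg -f CSV -o MyPerfLog_slice.csv -b "10/01/2008 09:00:00" -e "10/01/2008 17:00:00"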
One of the tricks that I have used in the past is to use Performance/System Monitor to log this data to a SQL database. SQL Express can work great for this. Then you can generate reports using Reporting Services, or for the more adventurous types you can do some cube analysis with Analysis Services. So while this does not solve the UI problem, it does allow you to make your own UI. When I did this previously I just used a simple Reporting Services graph.
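To give an idea of what the reporting queries look like: when Performance Monitor (or relog -f SQL) logs to a database it uses a standard schema of CounterData, CounterDetails and DisplayToID tables, so a Reporting Services dataset can be a plain aggregate query. This is only a rough sketch against that standard schema; the process instance name is hypothetical and the column handling may need adjusting for your setup:

    -- Hourly min/avg/max of Private Bytes for one process, from the perfmon SQL schema
    SELECT LEFT(cd.CounterDateTime, 13) AS HourBucket,   -- CounterDateTime is stored as text: 'yyyy-MM-dd HH:...'
           MIN(cd.CounterValue) AS MinBytes,
           AVG(cd.CounterValue) AS AvgBytes,
           MAX(cd.CounterValue) AS MaxBytes
    FROM CounterData cd
    JOIN CounterDetails det ON det.CounterID = cd.CounterID
    WHERE det.ObjectName = 'Process'
      AND det.CounterName = 'Private Bytes'
      AND det.InstanceName = 'MyLeakyApp'                -- hypothetical process name
    GROUP BY LEFT(cd.CounterDateTime, 13)
    ORDER BY HourBucket;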
SCOM 2007 with Reporting Services actually does a pretty good job of this. If not, the SQLH2 tool is almost as good, and it's free. You will probably have to customize the reports yourself, though.
I need to monitor my computer's processor usage, RAM usage and disk usage over a certain time period and get a log file or some text output with the details of the above resource usage. I don't need any complex data; simple usage over time should be enough. I tried a few tools, including Windows' built-in Performance Monitor, but had no luck. I only need usage over time as a log file or in some other simple text form.
My OS is Windows and I don't need to do this on any other OS.
Can anybody help me? Even pointing me in the right direction would be helpful.
You can create a Data Collector Set which will store the metrics of your choice in a .blg file; the file can be opened with Windows Performance Monitor, so you will be able to see the counter values as zoomable charts.
If you need the data in a text format, i.e. CSV, you can use the relog program to convert it.
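If you'd rather script the whole thing than click through the Performance Monitor UI, logman can create and run the Data Collector Set and relog does the conversion; the collector name, counters and paths below are only examples, and the actual .blg file name may get a numeric suffix appended:

    rem Create a collector sampling CPU, memory and disk every 15 seconds, then start it
    logman create counter MyUsageLog -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\PhysicalDisk(_Total)\% Disk Time" -si 00:00:15 -o C:\PerfLogs\MyUsageLog
    logman start MyUsageLog

    rem Later: stop the collector and convert the binary log to CSV
    logman stop MyUsageLog
    relog C:\PerfLogs\MyUsageLog.blg -f CSV -o C:\PerfLogs\MyUsageLog.csv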
An alternative option, which might be easier, is the Servers Performance Monitoring plugin, which is designed for use with Apache JMeter; this way you will be able to integrate Windows performance counter information with the performance test report. Moreover, it will work on any operating system supported by the underlying SIGAR libraries. See the How to Monitor Your Server Health & Performance During a JMeter Load Test article if interested.
While I find Jitterbit 4 a fairly powerful tool, my company and I seem to have pretty much maxed out the capabilities of v4.
I am trying to keep some now business critical processes alive, and finding that I'm swimming against the tide.
Does anyone have experience of improvements gained by moving to a later version of Jitterbit that would make this route worthwhile, or is it time to move to a more capable platform? I've used Business Objects DM in the past, but I don't think our budget would stretch to that.
I've done some limited research, but I need more information than some generalized blog quotes to form a case for either upgrading, or moving platform.
I'd like to assign multiple automated triggers - for example, every 15 minutes Monday to Friday and every hour on Saturday and Sunday. It would be nice to be able to open more than one project at a time in the IDE.
I have to look after a number of processes which take data from CSV files or MySQL/MSSQL tables and upload it to Netsuite CRM, or extract data from Netsuite CRM and move it to MySQL/MSSQL. (Interaction with Netsuite is via SOAP requests using XML.) Up until November these processes were generally run perhaps 3 or 4 times a day, but a number of them now run at 15 or 5 minute intervals. I've done some optimisation work, but the server is running pretty much at maximum speed - the limit being that we can update at most 2000 records per hour in Netsuite. And the company wants to do more in 2015.
The limit with Netsuite is absolute. However, the problems I want to sort out include better control of logging: I can't seem to turn off logging for the parts I don't want or need to be logged. I'd like to be able to open two projects in one IDE so I can compare code. And I'd like to be able to open the development IDE on one server but open the admin panel to view the other server - the IDE I use allows only one login.
If Talend or something else can offer these sorts of advantages then perhaps it's the way to go - especially as Jitterbit isn't a skill found in a lot of DevOps people here in the UK, whereas Talend and other tools are.
I'm going to start this by saying I really don't have any knowledge of Jitterbit at all, so I have no real comparison. The other thing to add is that some of the things you want are available in the enterprise licences for Talend but not in the free Talend Open Studio (TOS) edition. If you have absolutely zero budget you could probably get by with TOS, using external scripts to build your jobs and projects and running the built JARs with cron or some other launcher.
I'll start by talking about what you can do with the enterprise editions of Talend (such as Talend Enterprise Data Integration).
Talend's enterprise editions come with a Talend Administration Centre (TAC) that can be used to schedule jobs on multiple triggers and deploy them to chosen Job Execution Servers. It's pretty trivial to set up cron-style triggers to run every 15 minutes Monday to Friday and then another one to run every hour on Saturday and Sunday. The TAC also provides a centralised reference to all of the Talend cluster's configuration and settings, as well as handling user creation and privilege assignment. You can also see some logging when Talend is configured to use the Activity Monitoring Console (AMC), and any job-level logging can be configured in the job itself and then viewed in the execution history of the task.
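In plain cron notation the two schedules asked about would look roughly like the lines below; the TAC configures its CRON triggers through form fields rather than a raw crontab, so treat this purely as an illustration of the pattern:

    # Every 15 minutes, Monday to Friday
    */15 * * * 1-5

    # Every hour on the hour, Saturday and Sunday
    0 * * * 6,0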
I'm not sure what you mean about being able to open two projects at a time to compare code, or what you'd use it for, but you can open multiple jobs at the same time to look at them. Multiple projects at the same time is a no-go. I guess you could install the Studio twice in separate locations with separate workspaces (Talend Studio is based on Eclipse) and then open a project in each and compare them visually. I'm not really sure why you would do that, though.
If you're finding that you have lots of processes maxing out your job execution server, you can easily add more job execution servers and deploy some of the tasks to them. Unfortunately you can't just add a bunch of job execution servers and have the TAC load-balance the work across them. With just TOS you could always have more commodity machines that you manually deploy prebuilt binaries to and execute (it's just running a binary JAR, so they only need a JRE on them). It might be a bit of a pain to organise, though.
Talend's enterprise editions also come with some centralised source control in the form of SVN (although quite bastardised) which is useful if you ever intend to add more team members as putting TOS into source control can be a pain.
As for things that aren't enterprise-specific, Talend generates reasonably performant Java code (it has easily matched any of my requirements with essentially no optimisation effort so far). For instance, I tend to hit around 3 requests per second when dealing with internal network web services. Obviously if Netsuite simply takes a long time to respond to each request then that might not help.
Talend has out-of-the-box connectors for all of the data sources you mention except Netsuite (although there is an unofficial NetSuite connector on TalendForge), but as with Jitterbit you should be able to talk to it easily with XML over SOAP.
If I were you I'd download TOS and see if it does what you need as is. If you think you might want some of the enterprise capabilities then they offer a free 30 day trial.
You might also want to be critical and think about what you could potentially lose by moving away from Jitterbit.
Does anybody know of a good testing tool that can produce a graph of CPU and RAM usage?
For example, I will run an application, and while the application is running the testing tool will record CPU and RAM usage and produce a graph as output.
Basically, what I'm trying to test is how much load an application puts on RAM and CPU.
Thanks in advance.
If this is Windows, the easiest way is probably Performance Monitor (perfmon.exe).
You can configure the counters you are interested in (such as Processor Time, Committed Bytes, etc.) and create a Data Collector Set that measures these counters at the desired interval. There are even templates for a basic System Performance report, or you can add counters for the particular process you are interested in.
You can schedule when the sampling should run, and you will be able to see the results using PerfMon or export them to a file for further processing.
Video tutorial for the basics: http://www.youtube.com/watch?v=591kfPROYbs
A good sample showing how to monitor SQL Server:
http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/
LoadRunner is the best I can think of, but it's very expensive too! Depending on what you are trying to do, there might be cheaper alternatives.
Any tool which can hook into the standard Windows or 'NIX system utilities can do this. This has been a de facto feature set on just about every commercial tool for the past 15 years (HP, IBM, Micro Focus, etc.). Some of the web-only commercial tools (but not all) and the hosted services offer this as well. For the hosted services you will generally need to punch a hole through your firewall for them to get access to the hosts for monitoring purposes.
On the open source front this is a totally mixed bag. Some have it, some don't. Some support one platform but not others (e.g. Windows but not 'NIX, or vice versa).
What tools are you using? It is unfortunately common for people to have performance tools in use and not be aware of their existing toolset's monitoring capabilities.
All of the major commercial performance testing tools have this capability, as well as a fair number of the open source ones. The ability to integrate monitor data with response time data is key to the identification of bottlenecks in the system.
If you have a commercial tool and your staff is telling you that it cannot be done then what they are really telling you is that they don't know how to do this with the tool that you have.
It can be done using JMeter: once you install the agent on the target machine you just need to add the PerfMon monitor to your test plan.
It will produce two result files: the perfmon file and the requests log.
You could also build a plot that compares resource consumption to the load and throughput. The throughput stops increasing when some resource's capacity is exceeded. As you can see in the image, CPU time increases as the load increases.
JMeter perfmon plugin: http://jmeter-plugins.org/wiki/PerfMon/
I know this is an old thread, but I was looking for the same thing today, and as I did not find anything that was simple to use and produced graphs, I made this helper program for ApacheBench:
https://github.com/juanluisbaptiste/apachebench-graphs
It will run apachebench and plot the results and percentile files using gnuplot.
I hope it helps someone.
I have a BIRT report with performance problems: it takes approximately 5 minutes to run.
At the beginning I thought the problem was the database: this report uses a quite complex SQL Server stored procedure to retrieve data. After a lot of SQL optimization this procedure now takes ~20 seconds to run (in the management console).
However, the report itself still takes too much time (several minutes). How do I identify the other bottlenecks in BIRT report generation? Is there a way to profile the entire process? I'm running it using the www viewer (running inside Tomcat 5.5), and I don't have any Java event handlers, everything is done using standard SQL and JavaScript.
I watched the webinar "Designing High Performance BIRT Reports"; it has some interesting considerations, but it didn't help much...
As I write this answer the question is getting close to 2 years old, so presumably you found a way around the problem. No one has offered a profiler for the entire process, so here are some ways of identifying bottlenecks.
Start-up time - about a minute can be spent here
Running a couple of reports one after the other, or starting a second while the first is running, can help diagnose issues.
SQL query run time - good solutions are mentioned in the question
Any SQL trace and performance testing will identify issues.
Building the report - this is where I notice the lion's share of the time being taken. Run a SQL trace while the report is being created. Even a relatively simple table with lots of data can take around a minute to configure and display (HTML via Apache Tomcat) after the SQL trace indicates the query is done.
Simplify the report, or create a clone with fewer graphs or tables; run with and without pieces to see if any make a notable difference.
Modify the query to bring back fewer records; fewer records are easier to display.
Delivery method - PDF, Excel and HTML can each have different issues.
Try the report in different formats.
If one is significantly slower, try different emitters.
For anyone else having problems with BIRT performance, here are a few more hints.
Profiling a BIRT report can be done using any Java profiler - write a simple Java test that runs your report and then profile that.
As an example I use the unit tests from the SpudSoft BIRT Excel Emitters and run JProfiler from within Eclipse.
The problem isn't the difficulty of profiling it, it's understanding the data produced :)
Scripts associated with DataSources can absolutely kill performance. Even a script that looks as though it should only have an impact up front can really slow things down. This is the biggest performance killer I've found (so big I rewrote quite a chunk of the Excel Emitters to make it unnecessary).
The emitter you use has an impact.
If you are trying to narrow down performance problems, always do separate Run and Render tasks so you can easily see where to concentrate your efforts.
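To illustrate what I mean, here is a minimal sketch using the standard BIRT report engine API with separate run and render phases timed independently; the file names are placeholders and error handling is omitted:

    import org.eclipse.birt.core.framework.Platform;
    import org.eclipse.birt.report.engine.api.*;

    public class RunThenRender {
        public static void main(String[] args) throws Exception {
            EngineConfig config = new EngineConfig();
            Platform.startup(config);
            IReportEngineFactory factory = (IReportEngineFactory) Platform
                    .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
            IReportEngine engine = factory.createReportEngine(config);

            // Run phase: execute data sets and build the intermediate report document
            IReportRunnable design = engine.openReportDesign("MyReport.rptdesign"); // placeholder path
            IRunTask run = engine.createRunTask(design);
            long t0 = System.currentTimeMillis();
            run.run("MyReport.rptdocument");
            run.close();
            System.out.println("Run:    " + (System.currentTimeMillis() - t0) + " ms");

            // Render phase: turn the report document into the final output format
            IReportDocument doc = engine.openReportDocument("MyReport.rptdocument");
            IRenderTask render = engine.createRenderTask(doc);
            HTMLRenderOption options = new HTMLRenderOption();
            options.setOutputFileName("MyReport.html");
            options.setOutputFormat("html");
            render.setRenderOption(options);
            long t1 = System.currentTimeMillis();
            render.render();
            render.close();
            System.out.println("Render: " + (System.currentTimeMillis() - t1) + " ms");

            doc.close();
            engine.destroy();
            Platform.shutdown();
        }
    }

If the run phase dominates, look at the data sets and DataSource scripts; if the render phase dominates, look at the layout and the emitter.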
Different emitter options can impact performance, particularly with the third party emitters (the SpudSoft emitters now have a few options for making large reports faster).
The difference between Fixed-Layout and Auto-Layout is significant, try both.
Have you checked how much memory you are using in Tomcat? You may not be assigning enough memory. A quick test is to launch the BIRT Designer and assign it additional memory. Then, within the BIRT Designer software, run the report.
I was looking for an ETL tool and on Google found a lot about Pentaho Kettle.
I also need a data analyzer to run on a star schema so that business users can play around and generate any kind of report or matrix. Again, Pentaho Analyzer is looking good.
The other part of the application will be developed in Java, and the application should be database agnostic.
Is Pentaho good enough, or are there other tools I should check out?
Pentaho seems to be pretty solid, offering the whole suite of BI tools, with improved integration reportedly on the way. But... the chances are that companies wanting to go the open source route for their BI solution are also most likely to end up using open source database technology... and in that sense "database agnostic" can easily be a double-edged sword. For instance, you can develop a cube in Microsoft's Analysis Services in the comfortable knowledge that whatever MDX/XMLA your cube sends to the database will be interpreted consistently, holding very little in the way of nasty surprises.
Compare that to the Pentaho stack, which will typically end up interacting with PostgreSQL or MySQL. I can't vouch for how PostgreSQL performs in the OLAP realm, but I do know from experience that MySQL - for all its undoubted strengths - has "issues" with the types of SQL that typically crop up all over the place in an OLAP solution (you can't get far in a cube without using GROUP BY or COUNT DISTINCT). So part of what you save in licence costs will almost certainly be spent solving issues arising from the fact that Pentaho doesn't always know which database it is talking to - robbing Peter to (at least partially) pay Paul, so to speak.
Unfortunately, more info is needed. For example:
will you need to exchange data with well-known apps (Oracle Financials, Remedy, etc)? If so, you can save a ton of time & money with an ETL solution that has support for that interface already built-in.
what database products (and versions) and file types do you need to talk to?
do you need to support querying of web-services?
do you need near real-time trickling of data?
do you need rule-level auditing & counts to account for every single row?
do you need delta processing?
what kinds of machines do you need this to run on? linux? windows? mainframe?
what kind of version control, testing and build processes will this tool have to comply with?
what kind of performance & scalability do you need?
do you mind if the database ends up driving the transformations?
do you need this to run in userspace?
do you need to run parts of it on various networks disconnected from the rest? (not uncommon for extract processes)
how many interfaces and of what complexity do you need to support?
You can spend a lot of time deploying and learning an ETL tool - only to discover that it really doesn't meet your needs very well. You're best off taking a couple of hours to figure that out first.
I've used Talend before with some success. You create your translation by chaining operations together in a graphical designer. There were definitely some WTF's and it was difficult to deal with multi-line records, but it worked well otherwise.
Talend also generates Java and you can access the ETL processes remotely. The tool is also free, although they provide enterprise training and support.
There are lots of choices. Look at BIRT, Talend and Pentaho if you want free tools. If you want much more robustness, look at Tableau and BIRT Analytics.