Can anyone recommend disk I/O benchmarking software for Windows? - windows

I want to test the performance of a filesystem under different conditions.
Specifically, I want to test the performance of Windows virtual machines with and without compression, both on a "normal" hard disk and on a USB disk, because it would be interesting to see exactly what the difference is.
What I need is a program that can test different aspects of filesystem performance (random access, sequential read/write, etc.) and make pretty graphs that go well with my blog. Preferably the application should be automatable so I can add it to startup; that way the timing is the same for each run and I can repeat the runs for verification.
I can post a link to the results here when I get around to testing. Right now it's just in the planning phase.

Iometer is the I/O measurement tool, and it's free. From the website:
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by the Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998; since then it has become widespread within the industry. Intel has since discontinued work on Iometer and handed it over to the Open Source Development Lab (OSDL). In November 2001, a project was registered at SourceForge.net and an initial drop was provided. Since the relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting and extending the product.
The tool (the Iometer and Dynamo executables) is distributed under the terms of the Intel Open Source License. The iomtr_kstat kernel module, as well as other future independent components, is distributed under the terms of the GNU Public License.

You said you'd like pretty graphs for your blog. In my use of Iometer I've never seen it produce a graph, though it's possible I overlooked an existing feature.
Alternatively, judging by its website, IOzone might give you graphs:
http://www.iozone.org/
Then again, it may be that IOzone only collects the data used to create the graphs shown on its site.
Either way, it's still another option for I/O benchmarking.

Additional server-oriented disk benchmarks:
Diskspd
fio
vdbench
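Since the original question asks for automated, repeatable runs, a thin wrapper script that launches the benchmark and timestamps the output can be dropped into the Startup folder or registered as a scheduled task. Below is a minimal sketch using DiskSpd from Python; the paths and the DiskSpd parameters are only illustrative assumptions, so adjust the workload to your own test matrix:

```python
import datetime
import pathlib
import subprocess

# Illustrative paths -- adjust for your environment.
DISKSPD = r"C:\Tools\diskspd.exe"
TARGET = r"D:\bench\testfile.dat"
RESULTS = pathlib.Path(r"D:\bench\results")

def run_benchmark():
    RESULTS.mkdir(parents=True, exist_ok=True)
    # Example workload: 60 s of 4 KB random reads, 4 threads,
    # 32 outstanding I/Os, against a 1 GB test file.
    cmd = [DISKSPD, "-b4K", "-d60", "-r", "-w0", "-t4", "-o32", "-c1G", TARGET]
    result = subprocess.run(cmd, capture_output=True, text=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    out = RESULTS / f"diskspd-{stamp}.txt"
    out.write_text(result.stdout)
    return out

if __name__ == "__main__":
    print(f"Results written to {run_benchmark()}")
```

Because each run uses identical parameters, the saved result files stay comparable across boots and can be parsed later to build the graphs for the blog post.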

Parallel programming service on the Internet

Here is my question:
Is there any service or technology for running parallel algorithms on many computers without knowing them?
For example: I write a parallel algorithm. My friends install a simple client app, and if they have an internet connection they can contribute their spare processor capacity to my calculation. I would like to see each of them as an additional core in my CPU.
If there is no such technology, are there any unsolvable problems in developing one? (I know there must be a lot of problems with code transfer, operating systems, and compatibility.)
I believe you can use BOINC to set up your own volunteer computing project, but I have no first-hand experience to report.

Performance Testing Tool That Can Produce a Graph

Does anybody know a good testing tool that can produce a graph of CPU and RAM usage?
For example, I will run an application, and while the application is running the testing tool will record CPU and RAM usage and produce a graph as output.
Basically, what I'm trying to test is how much load an application puts on RAM and CPU.
Thanks in advance.
If this is Windows, the easiest way is probably Performance Monitor (perfmon.exe).
You can configure the counters you are interested in (such as % Processor Time or Committed Bytes) and create a Data Collector Set that samples those counters at the desired interval. There are even templates for a basic System Performance report, or you can add counters for the particular process you are interested in.
You can schedule when the sampling should run, and afterwards view the results in PerfMon or export them to a file for further processing.
Video tutorial for the basics: http://www.youtube.com/watch?v=591kfPROYbs
Good Sample where it shows how to monitor SQL:
http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/
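If you would rather script the sampling yourself and get a graph out directly, here is a minimal sketch using the third-party psutil and matplotlib Python packages. This is an alternative to PerfMon, not part of it, and the sample count and interval are arbitrary choices:

```python
import psutil
import matplotlib.pyplot as plt

SAMPLES = 60      # how many samples to take
INTERVAL = 1.0    # seconds between samples

times, cpu, ram = [], [], []
for i in range(SAMPLES):
    # cpu_percent(interval=...) blocks for the interval and returns
    # system-wide CPU utilisation; virtual_memory().percent is RAM in use.
    cpu.append(psutil.cpu_percent(interval=INTERVAL))
    ram.append(psutil.virtual_memory().percent)
    times.append(i * INTERVAL)

plt.plot(times, cpu, label="CPU %")
plt.plot(times, ram, label="RAM %")
plt.xlabel("Time (s)")
plt.ylabel("Utilisation (%)")
plt.title("Resource usage while the application runs")
plt.legend()
plt.savefig("usage.png")
```

Run the script while the application under test is running. To watch a single process instead of the whole system, psutil.Process(pid) exposes per-process cpu_percent() and memory_info().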
LoadRunner is the best I can think of, but it's very expensive too. Depending on what you are trying to do, there might be cheaper alternatives.
Any tool which can hook into the standard Windows or *nix system utilities can do this. This has been a de facto feature of just about every commercial tool for the past 15 years (HP, IBM, Micro Focus, etc.). Some of the web-only commercial tools (but not all) and the hosted services offer this as well. For the hosted services you will generally need to punch a hole through your firewall so they can reach the hosts for monitoring purposes.
On the open-source front this is a totally mixed bag. Some have it, some don't. Some support one platform but not others (e.g. Windows but not *nix, or vice versa).
What tools are you using? It is unfortunately common for people to have performance tools in use and not be aware of their existing toolset's monitoring capabilities.
All of the major commercial performance testing tools have this capability, as well as a fair number of the open source ones. The ability to integrate monitor data with response time data is key to the identification of bottlenecks in the system.
If you have a commercial tool and your staff is telling you that it cannot be done then what they are really telling you is that they don't know how to do this with the tool that you have.
It can be done using JMeter: once you install the agent on the target machine you just need to add the PerfMon monitor to your test plan.
It will produce two result files, the perfmon file and the requests log.
You can also build a plot that compares resource consumption to load and throughput: the throughput stops increasing when some resource's capacity is exceeded, while CPU time keeps rising as the load increases.
JMeter perfmon plugin: http://jmeter-plugins.org/wiki/PerfMon/
I know this is an old thread, but I was looking for the same thing today, and as I did not find something that was simple to use and produced graphs, I made this helper program for apachebench:
https://github.com/juanluisbaptiste/apachebench-graphs
It will run apachebench and plot the results and percentile files using gnuplot.
I hope it helps someone.

Software patching at a billion miles

Could someone here shed some light about how NASA goes about designing their spacecraft architecture to ensure that they are able to patch bugs in the deployed code?
I have never built any “real time” type systems and this is a question that has come to mind after reading this article:
http://pluto.jhuapl.edu/overview/piPerspective.php?page=piPerspective_05_21_2010
“One of the first major things we’ll do when we wake the spacecraft up next week will be uploading almost 20 minor bug fixes and other code enhancements to our fault protection (or ‘autopilot response’) software.”
I've been a developer on public telephone switching system software, which has pretty severe constraints on reliability, availability, survivability, and performance that approach what spacecraft systems need. I haven't worked on spacecraft (although I did work with many former shuttle programmers while at IBM), and I'm not familiar with VxWorks, the operating system used on many spacecraft (including the Mars rovers, which have a phenomenal operating record).
One of the core requirements for patchability is that a system should be designed from the ground up for patching. This includes module structure, so that new variables can be added, and methods replaced, without disrupting current operations. This often means that both old and new code for a changed method will be resident, and the patching operation simply updates the dispatching vector for the class or module.
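This is obviously not spacecraft code, but a toy sketch (in Python, with made-up function names) may make the dispatch-vector idea concrete: the old and new versions of a routine are both resident, and "applying the patch" is nothing more than swapping an entry in the vector.

```python
# Callers always go through the dispatch vector, never directly
# to a concrete implementation of a routine.
dispatch = {}

def compute_attitude_v1(sensor_reading):
    # Original version: stays resident even after the patch.
    return sensor_reading * 2

def compute_attitude_v2(sensor_reading):
    # Patched version, uploaded later.
    return sensor_reading * 2 + 1

dispatch["compute_attitude"] = compute_attitude_v1

def call(name, *args):
    return dispatch[name](*args)

print(call("compute_attitude", 10))   # old behaviour: 20

# The patch operation is a single update to the vector entry;
# subsequent dispatches pick up the new code, and the old code
# can be restored just as easily (un-patching).
dispatch["compute_attitude"] = compute_attitude_v2
print(call("compute_attitude", 10))   # new behaviour: 21
```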
It is just about mandatory that the patching (and un-patching) software is integrated into the operating system.
When I worked on telephone systems, we generally used the patching and module-replacement functions in the system to load and test our new features as well as bug fixes, long before these changes were submitted for builds. Every developer needs to be comfortable with patching and replacing modules as part of their daily work. It builds a level of trust in these components, and makes sure that the patching and replacement code is exercised routinely.
Testing is far more stringent on these systems than anything you've ever encountered on any other project. Complete and partial mock-ups of the deployment system will be readily available. There will likely be virtual machine environments as well, where the complete load can be run and tested. Test plans at all levels above unit test will be written and formally reviewed, just like formal code inspections (and those will be routine as well).
Fault tolerant system design, including software design, is essential. I don't know about spacecraft systems specifically, but something like high-availability clusters is probably standard, with the added capability to run both synchronized and unsynchronized, and with the ability to transfer information between sides during a failover. An added benefit of this system structure is that you can split the system (if necessary), reload the inactive side with a new software load, and test it in the production system without being connected to the system network or bus. When you're satisfied that the new software is running properly, you can simply failover to it.
As with patching, every developer should know how to do failovers, and should do them both during development and testing. In addition, developers should know every software update issue that can force a failover, and should know how to write patches and module replacement that avoid required failovers whenever possible.
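Here is a very rough sketch of that active/standby pattern (a toy illustration in Python with invented names, not a description of any real telecom or spacecraft implementation): the inactive side is reloaded and verified while the active side keeps handling traffic, and the failover itself is just a swap.

```python
class Side:
    """One half of a redundant pair, running a particular software load."""
    def __init__(self, name, version):
        self.name = name
        self.version = version

    def handle(self, request):
        return f"side {self.name} (load {self.version}) handled {request}"

class RedundantPair:
    def __init__(self):
        self.active = Side("A", "1.0")
        self.standby = Side("B", "1.0")

    def reload_standby(self, new_version):
        # Only the inactive side is reloaded; service continues on the active side.
        self.standby.version = new_version

    def failover(self):
        # Promote the standby once its new load has been tested.
        self.active, self.standby = self.standby, self.active

pair = RedundantPair()
print(pair.active.handle("call-1"))   # served by side A on load 1.0
pair.reload_standby("1.1")            # stage and test the new load on side B
pair.failover()
print(pair.active.handle("call-2"))   # served by side B on load 1.1
```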
In general, these systems are designed from the ground up (hardware, operating system, compilers, and possibly programming language) for these environments. I would not consider Windows, Mac OSX, Linux, or any unix variant, to be sufficiently robust. Part of that is realtime requirements, but the whole issue of reliability and survivability is just as critical.
UPDATE: As another point of interest, here's a blog by one of the Mars rover drivers. This will give you a perspective on the daily life of maintaining an operating spacecraft. Neat stuff!
I've never built a real-time system either, but I suspect those systems do not have a memory protection mechanism. They do not need one, since they wrote all of their own software themselves. Without memory protection, it is trivial for one program to write to another program's memory, and this can be used to hot-patch a running program (self-modifying code was a popular technique in the past; without memory protection, the same techniques used for self-modifying code can be used to modify another program's code).
Linux has been able to do minor kernel patching without rebooting for some time with Ksplice. This is necessary in situations where any downtime can be catastrophic. I've never used it myself, but I think the technique they use is basically this:
Ksplice can apply patches to the Linux kernel without rebooting the computer. Ksplice takes as input a unified diff and the original kernel source code, and it updates the running kernel in memory. Using Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch. Ksplice performs this analysis at the ELF object code layer, rather than at the C source code layer.
To apply a patch, Ksplice first freezes execution of a computer so it is the only program running. The system verifies that no processors were in the middle of executing functions that will be modified by the patch. Ksplice modifies the beginning of changed functions so that they instead point to new, updated versions of those functions, and modifies data and structures in memory that need to be changed. Finally, Ksplice resumes each processor running where it left off.
(from Wikipedia)
Well I'm sure they have simulators to test with and mechanisms for hot-patching. Take a look at the linked article below - there's a pretty good overview of the spacecraft design. Section 5 discusses the computation machinery.
http://www.boulder.swri.edu/pkb/ssr/ssr-fountain.pdf
Of note:
Redundant processors
Command switching by the uplink card that does not require processor help
Time-lagged rules
I haven't worked on spacecraft, but the machines I've worked on have all been built to have a stable idle state where it's possible to shut down the machine briefly to patch the firmware. The systems that have accommodated 'live' updates are those that were broken into interacting components, where you can bring down one segment of the system long enough to update it and the other components can continue operating as normal, as they can tolerate the temporary downtime of the serviced component.
One way you can do this is to have parallel (redundant) capabilities, such as parallel machines that all perform the same task, so that the process can be routed around the machine under service. The benefit of this approach is that you can bring it down for longer periods for more significant service, such as regular hardware preventative maintenance. Once you have this capability, supporting downtime for a firmware patch is fairly easy.
One of the approaches that's been used in the past is to use LISP.

Cost of system integration?

On the software development projects that you have worked on, what has been the approximate cost (expressed as a percentage of total system cost) of system integration? System integration includes integrating with other software, databases, etc.
33.3% because system integration is usually associated with a fair amount of risk that is not as prevalent in other phases of the projects (coding, documentation, etc).
This is a very difficult value to estimate, especially when you are facing integration with a system that you are not familiar with. The best you can do is track your or your team's past performance on similar projects and use those values to try to estimate how you will perform on new projects.
Generally, system integration will take longer if:
It uses a protocol, database engine, operating system, etc. that you or your team have not yet worked with.
Vendor or community support is lacking or unresponsive.
Official system documentation is not detailed enough or is out of date.
The system does not have large global market share. Such a system will not have a wide user base and a big footprint in online programming Q&A sites such as this one. This may include new, less popular, or highly domain-bound systems.
Between 0 and 99%. I have built systems with no integration at all and systems that were basically just integration of other systems. The nice thing about integration is that it can be easy to estimate, but only when the interface is fully understood; then it is just a duplication of functionality.
There are some complicating factors, though. They can make it very expensive to impossible:
is the system you have to integrate with well understood (do the programmers who developed it still work there?)
is the system you have to integrate with well-refactored (and has automated unit and acceptance tests)?
single or multiple platform?
are domain experts available?
It depends on the integrated system's importance and other factors.
I've worked in systems with integration in a bunch of web services that were the application's core. If the web services were down, our system was simply useless.
I would list the following variables when trying to evaluate the cost:
How many systems do you integrate and how frequently are they changed?
Do you have documentation to these systems?
Is it a third party component/service that you have no control of?
If you have control over the integrated system, does it use a lot of "legacy" code, like COBOL? (Just an example; at least where I work, COBOL programmers are expensive.)
Are your employees experienced with the integrated system and with the application itself?
In case of failure of the integrated service, what is the impact on your application?
What is an employee's hourly rate in these scenarios? How many hours would they need to work on these integrated systems? How much money do you have for your project? I can't say it's going to cost X% in your case without knowing these details, especially the last one.

Has anyone tried their software with ReactOS yet?

The Free MS Windows replacement operating system ReactOS has just released a new version. They have a large and active development team.
Have you tried your software with it yet?
If so, what is your recommendation?
Is it time to start investigating it as a serious Windows replacement?
Targeting ReactOS specifically is a bit too narrow IMO -- perhaps a better focus is to target compatibility with WINE. Because ReactOS shares so many of its usermode DLLs with WINE, targeting WINE should result in the app running just fine on ReactOS.
Of course, there will always be things that WINE can't emulate well (hence the need for ReactOS). In this way, it seems that if something runs in WINE, it will run in ReactOS, whereas the fact that something runs in ReactOS doesn't mean that it will necessarily run in WINE.
Targeting WINE is well documented, perhaps easier to test, and by definition, should make your app compatible with ReactOS as a matter of course. In this way, you're not only gathering the rather large user base of current WINE users, but you're future-proofing yourself for whenever anyone wants to use your app with ReactOS.
On their homepage, in the Tour, you can see a partial list of office applications, tools and games that already run OK (more or less) on ReactOS. If you subscribe to the newsletter you'll receive info about much more; for instance, I was quite surprised when I read that most SQL Server 2000 tools actually work on ReactOS: Query Analyzer, OSQL and Books Online work fine, Enterprise Manager and Profiler are buggy, and the DBMS itself won't run at all.
At a former workplace (an all-MS shop) we seriously investigated it as a way to reduce our license expenditure while keeping our in-house developed apps. Since it couldn't run MSDE properly, we had to abandon the project; I hope this will be solved in the future so my ex-coworkers can push for it again.
These announcements may well also be on their homepage, but I couldn't find them after five minutes of searching. Probably the easiest way to keep up with all these compatibility issues is to join the newsletter or look through its archives.
I have been tracking this OS's progress for quite some time. I believe it has all the potential to really bring an OSS operating system to the masses, because it breaks the "chicken and egg" problem: it has applications and drivers from the very beginning (since it aims for full ABI compatibility with MS Windows).
Just wait for their first beta; I won't be surprised if they surpass Linux in popularity really soon after that...
Post edit: Found it! Look at the Support Database section; it's the place on the web to check whether a particular piece of hardware or program works on ReactOS.
ReactOS has been under development for a long long time.
They were in some hot water earlier because some of their code appeared to be a line-by-line disassembly of some NT kernel code; I think they have since replaced all of it.
I wouldn't bother with cross platform testing until they hit the same market penetration as Linux, which I would wager is never.
Until ReactOS stops randomly crashing within 5 minutes of booting while just sitting there, I won't worry about testing my code on it. Don't get me wrong, I like ReactOS, but it's just not stable enough for any meaningful testing yet!
No, I do not think it is time to start thinking of it as a Windows replacement.
As the site states, it's still in the alpha stages. More importantly, whose Windows replacement? Yours? Your users'? The former is one thing, the latter is categorically a no-go.
As an aside, I'm not really sure who this OS is targeting. It has to be people who rely on Windows software but don't want to pay, because people who simply don't want Windows can use Mac OS / Linux, and the support (community or otherwise) for those choices is good.
Moreover, if you use Linux you already have some amounts of Windows software support via Wine.
Back to people who rely on Windows software but don't want to pay: if they are home users they can simply pirate it, and if they are large business users they already have support contracts, trained people, etc. It's hard enough for large businesses to agree to update to new versions of Windows, let alone move to an open-source replacement.
So I suppose that leaves small businesses who don't want to obtain illegal copies of MS software, can't afford the OS licences, and rely on software that only runs on Windows and has bad or non-existent Wine compatibility.
It is a useful replacement for Windows when it runs 'your' software without crashing. At the moment it is not a general-purpose OS, as it is too unstable (being only alpha), but people have already used ReactOS successfully in anger for specific tasks. As a Windows replacement it has multiple potential uses: sandbox systems, test and development systems, multiple virtual instances, embedded devices, even packaging/bundling legacy apps with their own compatible OS. Driver and application compatibility, freed from Microsoft's policy of planned obsolescence and regular GUI renewal; what's not to like?
