Network Usage of a Process Using PowerShell

I want to calculate the bytes sent and received by a particular process, and I want to use PowerShell for that.
Something like what I can see in Resource Monitor under Network Activity.
How can I do that using Get-Counter?

There is a really good Scripting Guy article on using the Get-Counter cmdlet here:
Scripting Guy - Get-Counter
The trick will be finding the counter that you need, and I don't think you can get the granularity you're after, as these are the same counters that PerfMon uses. They're focused on the whole Network Interface rather than on the individual processes using it. That said, if your process is the only thing using the given interface, it should do the trick nicely.
Have a look at the Network Interface options available for a start:
(get-counter -list "Network Interface").paths
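Once you've picked a path from that list, you can sample it directly. As a quick sketch, this polls bytes sent and received on every interface, taking five one-second samples:
Get-Counter -Counter "\Network Interface(*)\Bytes Received/sec", "\Network Interface(*)\Bytes Sent/sec" -SampleInterval 1 -MaxSamples 5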

You can't, it seems. I'm absolutely unable to find the counters the performance monitor is reading from, though other people may chime in. There may also be some way other than Get-Counter, but that is what you specifically requested.
Looking through the counters, the closest thing you will find is the "IO Read Bytes/sec" and "IO Write Bytes/sec" counters on the process object.
The problem with those is that they count more than just network activity. The description in perfmon says:
"This counter counts all I/O activity generated by the process to
include file, network and device I/Os."
That being said, if you know that the process you want to monitor only or mainly writes to the network connection, this may be better than not measuring anything at all.
You'd go about it like this (I'll use Chrome as an example since it is conveniently running and using data right now):
get-counter "\Process(chrome*)\IO Read Bytes/sec"
This will just give you a one-time reading. If you want to keep reading, add the -Continuous switch.
The PerformanceCounterSampleSet object that is returned is not exactly pretty to work with, but you can find the actual reading in $obj.countersamples.cookedvalue.
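For example, a one-off reading can be unpacked like this:
$sample = Get-Counter "\Process(chrome*)\IO Read Bytes/sec"
$sample.CounterSamples | Select-Object InstanceName, CookedValue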
The list will be fairly long (if you browse like me). Chrome runs as many separate processes, so we'll do a bit of math to add them all up and present the result in KB.
Final result:
get-counter "\Process(chrome*)\IO Read Bytes/sec" -Continuous | foreach {
    [math]::round((($_.countersamples.cookedvalue | measure -sum).sum / 1KB), 2)
}
Running this will continuously output a reading of how many KB/s Chrome is using.

Related

Bash Cluster Monitor - Help for assignment

I am a student at university and I am stuck with some code for my assignment, and would appreciate help from the community. I am very new to bash; I would say that I do not know how to write bash at all, and I struggle with it. I much prefer Python or C#, lol. Anyway, I am required to create a cluster monitor. I have to create a simple menu with 2 options that allows the user to choose between "Cluster Status" and "Process Analytics". I have created the menu, it's very basic, and the code for that is below:
#!/bin/bash
clear
echo="Choose one of the following options: "
echo="1) Cluster Status"
echo="2) Process Analytics"
echo="3) Exit"
read ans:
if ["$ans"=='1']
then
echo="Loading..."
bash Cluster_Status.sh
elif ["$ans"=='2']
then
echo="Loading..."
bash Process_Analytics.sh
elif ["$ans"=='3']
then
echo="Loading..."
In this menu, I also need to make it so that when it has completed execution, it pauses and returns to the main menu.
I have got 5 nodes for the cluster, with sample data in them, which can be provided if anyone wants to help with this.
The part that I am most stuck with is the creating of the cluster status and the process analytics part of this.
The cluster status is a script that provides a screen and file printout of the current stats of the entire cluster, with all of its parameters set in a nodeconfig.read.me file, which I have. The stats need to be presented either as a sum or as an average; for example, the total number of CPUs would be a sum, and the total CPU load would be an average. I need to present this in a table, which I cannot figure out how to do, as I am limited in the number of different functions I can use.
The process analytics section asks me to process the current processes running on each node, if there are any, and I need to present the following stats on screen and also have them saved to a file:
• Most popular process (in terms of instances running)
• Most CPU demanding process (in terms of CPU usage)
• Most MEM demanding process (in terms of Memory usage)
• Most Disk demanding process (in terms of Disk usage)
• Most Net demanding process (in terms of Network usage)
• 5 top users of CPU/MEM/DISK & Net (separate tables)
I have got to do this in PuTTY, so I am limited to the built-in commands cat, grep, ls, pipes, echo, tr, tee, cut, touch, head and tail. These are the only things I can use when writing this, and because of that limitation I am stuck trying to work out how to write it. I have researched this for the last week, and I still do not know how to complete it. I am not asking for anyone to complete the assignment for me; I just want help on how to use these commands to create this script. Thank you in advance for your help, and I am sorry if this is posted in the wrong area. I am still fairly new to using Stack Overflow.

How can I pause execution of a Ruby script and serialize the process to disk?

One of my scripts takes a lot of time. I would like to pause it (e.g. by pressing p) and save it to the HDD (e.g. by pressing s) so I can resume it later from the HDD. Libraries like Thread or gems like Celluloid can pause some part of the code, but as far as I have seen they cannot save the current process to disk.
Ideally, I would like to put a few lines of code at the beginning of the script, or something easy like that.
TL;DR
You are solving the wrong problem. If your script takes too long to run, speed up your script rather than trying to serialize an OS process.
Alternative Approaches
If you insist on being able to freeze processes and save state to disk, you may want to consider running your processes inside a virtual machine like VirtualBox or VMware. Both of these products support the ability to pause a virtual machine and save the VM's current state to disk.
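With VirtualBox, for instance, the save/resume cycle is two commands ("ruby-worker" is a hypothetical VM name):
VBoxManage controlvm "ruby-worker" savestate
VBoxManage startvm "ruby-worker"
The first freezes the VM and writes its state to disk; the second resumes it later, exactly where it left off.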
I'm unaware of any way to store running OS processes on disk other than inside some sort of virtualization layer. If you really need this functionality, that's the approach I'd recommend. However, you'll probably get more bang for your buck by improving the efficiency of your code (profile or benchmark to find bottlenecks), scaling up your system, or scaling out your program's tasks in a distributed way.

System Resource Monitor/Graph

I'm looking for an app that does about the same thing as the Performance tab in Task Manager, but on a per-process basis and with more plotted values. At a minimum, I would like to be able to plot CPU and memory usage, but it would be nice if it could also plot:
Network usage
File system IO (per drive/share sub headings would be nice)
Open file stream count
lsof-like stuff
All the other stats that the Process tab can give you
... anything else ...
Windows comes with perfmon, which is pretty much exactly what you want. It has a huge number of counters in different categories, and individual apps can register their own counters.
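You can also read those same counters from a console; typeperf ships with Windows (the process name here is just an example):
typeperf "\Process(notepad)\% Processor Time" -si 1 -sc 5
The -si flag sets the sample interval in seconds and -sc the number of samples to take.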
I worked with Norm at FS Walker Hughes, he made this and published it on CodeProject
What you need is exactly this => processstudio
Both have full source code available.
Enjoy!
Just use NAPI.

Recovery from optical media ignoring read errors

I have backups of files archived in optical media (CDs and DVDs). These all have par2 recovery files, stored on separate media. Even in cases where there are no par2 files, minor errors when reading on one optical drive can be read fine on another drive.
The thing is, when reading faulty media, the read time is very, very long, because devices tend to retry multiple times.
The question is: how can I control the number of retries (i.e. set no retries or only one try)? Some system call? A library I can download? Do I have to work at the SCSI layer?
The question is mainly about Linux, but any Win32 pointers will be more than welcome too.
man readom, a program that comes with cdrecord:
-noerror
Do not abort if the high level error checking in readom found an
uncorrectable error in the data stream.
-nocorr
Switch the drive into a mode where it ignores read errors in
data sectors that are a result of uncorrectable ECC/EDC errors
before reading. If readom completes, the error recovery mode of
the drive is switched back to the remembered old mode.
...
retries=#
Set the retry count for high level retries in readom to #. The
default is to do 128 retries which may be too much if you like
to read a CD with many unreadable sectors.
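Putting those options together, an invocation might look like this (the device and image names are placeholders):
readom dev=/dev/sr0 f=image.iso retries=1 -noerror -nocorr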
The best tool available is dd_rhelp. Just run
dd_rhelp /dev/cdrecorder /home/myself/DVD.img
then take a cup of tea and watch the nice graphics.
The dd_rhelp rpm package info:
dd_rhelp uses ddrescue on your entire disc, and attempts to gather the maximum
valid data before trying for ages on badsectors. If you leave dd_rhelp work
for infinite time, it has a similar effect as a simple dd_rescue. But because
you may not have this infinite time, dd_rhelp jumps over bad sectors and rescue
valid data. In the long run, it parses all your device with dd_rescue.
You can Ctrl-C it whenever you want, and rerun-it at will, dd_rhelp resumes the
job as it depends on the log files dd_rescue creates. In addition, progress
is shown in an ASCII picture of your device being rescued.
I've used it a lot myself and it is very, very reliable.
You can install it from the DAG repository on Red Hat-like distributions.
Since dd was suggested, I should note that I know of the existence and have used sg_dd, but my question was not about commands (1) or (1m), but about system calls (2) or libraries (3).
EDIT
Another linux command-line utility that is of help, is sdparm. The following flag seems to disable hardware retries:
sudo sdparm --set=RRC=0 /dev/sr0
where /dev/sr0 is the device for the optical drive in my case.
While checking whether hdparm could modify the number of retries (doesn't seem so), I thought that, depending on the type of error, lowering the CD-ROM speed could potentially reduce the number of read errors, which could actually increase the average read speed. However, if some sectors are completely unreadable, then even lowering the CD-ROM speed won't help.
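If you want to experiment with that, the speed cap can be set from userspace; either of these should work (the device name is an example):
hdparm -E 4 /dev/sr0
eject -x 4 /dev/sr0
Both ask the drive to limit itself to 4x.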
Since you are asking about driver-level access, you should look into SCSI commands, or perhaps an ASPI-like API. On Windows, VSO Software (the developers of BlindRead/BlindWrite, mentioned below) developed a much better API, Patin-Couffin, that provides locked low-level access:
http://en.wikipedia.org/wiki/Patin-Couffin
That might get you started. However, at the end of the day, the drive is interfaced with SCSI commands, even if it's actually USB, SATA, ATA, IDE, or otherwise. You might also look up terms related to ATAPI, which was one of the first specifications for this CD-ROM SCSI layer interface.
I'd be surprised if you couldn't find a suitable Linux library or example of dealing with the lower-level commands using the above search terms and concepts.
Older answer:
BlindRead/BlindWrite was developed in the heyday of CD-ROM protection schemes, which often used intentionally bad sectors or error information to verify the original CD.
It will allow you to set a whole slew of parameters, including retries. Keep in mind that the CD-ROM drive itself determines how many times to retry, and I'm not sure that this is settable via software for many (most?) CD-ROM drives.
You can copy the disk to ISO format, ignoring the errors, and then use ISO utilities to read the data.
-Adam
Take a look at the ASPI interface. Available on both windows and linux.
dd(1) is your friend.
dd if=/dev/cdrom of=image bs=2352 conv=noerror,notrunc
The drive may still retry a bit, but I don't think you'll get any better without modifying firmware.

Copying Files over an Intermittent Network Connection

I am looking for a robust way to copy files over a Windows network share that is tolerant of intermittent connectivity. The application is often used on wireless, mobile workstations in large hospitals, and I'm assuming connectivity can be lost either momentarily or for several minutes at a time. The files involved are typically about 200KB - 500KB in size. The application is written in VB6 (ugh), but we frequently end up using Windows DLL calls.
Thanks!
I've used Robocopy for this with excellent results. By default, it will retry every 30 seconds until the file gets across.
I'm unclear as to what your actual problem is, so I'll throw out a few thoughts.
Do you want restartable copies (with such small file sizes, that doesn't seem like it'd be that big of a deal)? If so, look at CopyFileEx with COPYFILERESTARTABLE
Do you want verifiable copies? Sounds like you already have that by verifying hashes.
Do you want better performance? It's going to be tough, as it sounds like you can't run anything on the server. Otherwise, TransmitFile may help.
Do you just want a fire and forget operation? I suppose shelling out to robocopy, or TeraCopy or something would work - but it seems a bit hacky to me.
Do you want to know when the network comes back? IsNetworkAlive has your answer.
Based on what I know so far, I think the following pseudo-code would be my approach:
sourceFile = Compress("*.*");
destFile = "X:\files.zip";
int copyFlags = COPYFILEFAILIFEXISTS | COPYFILERESTARTABLE;
while (CopyFileEx(sourceFile, destFile, null, null, false, copyFlags) == 0) {
do {
// optionally, increment a failed counter to break out at some point
Sleep(1000);
while (!IsNetworkAlive(NETWORKALIVELAN));
}
Compressing the files first saves you the tracking of which files you've successfully copied and which you need to restart. It should also make the copy go faster (smaller total file size and larger single file size), at the expense of some CPU time on both sides. A simple batch file can decompress it on the server side.
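If PowerShell 5 or later happens to be available, the compression step itself is a one-liner (the paths here are placeholders):
Compress-Archive -Path C:\outgoing\* -DestinationPath C:\staging\files.zip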
Try using BITS (Background Intelligent Transfer Service). It's the infrastructure that Windows Update uses, is accessible via the Win32 API, and is built specifically to address this.
It's usually used for application updates, but should work well in any file moving situation.
http://www.codeproject.com/KB/IP/bitsman.aspx
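From VB6 you would reach BITS through its COM interface (IBackgroundCopyManager); as a sketch of the behavior, here is the same idea in PowerShell, with placeholder paths:
Import-Module BitsTransfer
# BITS rides out network drops on its own and resumes the job when the link returns.
$job = Start-BitsTransfer -Source "\\server\share\scan.dat" -Destination "C:\incoming\scan.dat" -Asynchronous
while ($job.JobState -notin "Transferred", "Error") {
    Start-Sleep -Seconds 5   # poll until BITS reports the job finished (or failed)
}
Complete-BitsTransfer -BitsJob $job   # commit the transferred file to disk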
I agree with Robocopy as a solution... that's why the utility is called "Robust File Copy".
I've used Robocopy for this with excellent results. By default, it will retry every 30 seconds until the file gets across.
And by default, a million retries. That should be plenty for your intermittent connection.
It also does restartable transfers, and you can even throttle transfers with a gap between packets (the /IPG switch) if you don't want to use all the bandwidth while other programs share the same connection.
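For reference, an invocation along those lines might look like this (the paths are placeholders):
robocopy \\server\share C:\incoming /Z /R:1000000 /W:30 /IPG:50
/Z makes copies restartable, /R and /W set the retry count and the wait between retries (shown here at their defaults of one million and 30 seconds), and /IPG inserts a 50 ms gap between blocks to throttle the transfer.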
How about simply sending a hash before or after you send the file, and comparing it with a hash of the file you received? That should at least make sure you have a correct file.
If you want to go all out, you could do the same process for small parts of the file; then, when you have all the pieces, join them on the receiving end.
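As a sketch of the whole-file check, here it is in PowerShell (the paths are placeholders; the VB6 app would compute the hash through CryptoAPI or similar):
# Hash the local original and the copy on the share, then compare.
$src = Get-FileHash "C:\outgoing\scan.dat" -Algorithm SHA256
$dst = Get-FileHash "\\server\share\scan.dat" -Algorithm SHA256
if ($src.Hash -eq $dst.Hash) { "copy verified" } else { "mismatch - resend" }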
You could use Microsoft SyncToy (free).
http://www.microsoft.com/Downloads/details.aspx?familyid=C26EFA36-98E0-4EE9-A7C5-98D0592D8C52&displaylang=en
Hm, it seems rsync does it, and does not need the server/daemon/install I thought it did. Just: rsync src dst.
SMS, if it's available, works.
