With xperf I can generate a trace and get a "flat" listing of all files read like so:
xperf -on FileIO+FILE_IO+FILE_IO_INIT+FILENAME -stackwalk FileRead+FileWrite+FileDelete
xperf -start FileIOSession -heap -PidNewProcess "C:\Python27\x86\python.exe scratchy.py" -WaitForNewProcess -BufferSize 1024 -MinBuffers 128 -MaxBuffers 512 -stackwalk HeapAlloc+HeapRealloc -f ./tempheap.etl
xperf -stop FileIOSession -stop -d fileio.etl
xperf -i fileio.etl -o fio_output.txt -a filename
Unfortunately, the fio_output.txt file contains a list of every file imaginable that was accessed (by my web browser, IDE, etc). More frustratingly, if I manually open xperfview and open the File I/O Summary Table, I can see my process (python.exe in this case) and the one file it reads (for test purposes), but I can't seem to find a way to output that same data from the CLI, which is what I need: an unattended, automated method of generating file access info.
If you want to view this data then you should load the trace into WPA, open the file I/O table, and arrange the columns appropriately. Since you want to group by process you should have the process column first, then the orange bar, and then whatever data columns you want.
If you want to export the data to programmatically parse it then you should use wpaexporter.exe, new in WPT 8.1. See this blog post which I wrote describing how to do this:
https://randomascii.wordpress.com/2013/11/04/exporting-arbitrary-data-from-xperf-etl-files/
Using wpaexporter lets you decide exactly what data columns you want to export instead of being constrained by the limited set of trace processing actions that xperf.exe gives you.
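The basic workflow is: arrange the columns you want in WPA, export that view as a .wpaProfile, then feed the profile and the trace to wpaexporter. A hypothetical invocation (the profile name is made up here, and flag spellings can vary between WPT versions, so check wpaexporter's help output):
wpaexporter -i fileio.etl -profile fileio_by_process.wpaProfile
The exported data lands in CSV form, which is easy to filter down to just python.exe in a script.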
I suspect you can get this data out of tracerpt.exe instead; I'd give that a try.
Related
Problem:
Hi everyone, I am currently building an automation suite using Ruby, Selenium WebDriver, and Cucumber to load data into the application through its GUI. I take input from mainframe .txt files. The scenarios are, for example, to create a customer and then load multiple accounts for them as per the data provided in the input files.
Current Approach
I execute the scenario using a rake task, passing the line number as a parameter, so the script is executed for only one set of data.
To read the data for a particular line, I'm using the code below:
File.readlines(file_path)[line_number.to_i - 1]
My purpose in loading line by line is to keep the execution running even if one line fails to load.
Shortcomings
Suppose I have to load 10 accounts for a single customer. My current script will run 10 times, loading one account each time. I want something that can load all the accounts in a single go.
What I am looking for
To overcome the above shortcoming, I want to capture the entire data set for a single customer from the file (accounts and so on) and load it into the application in a single execution.
I also have to keep track of execution time and memory allocation.
Please share your thoughts on this approach; any suggestions or improvements are welcome. (Sorry for the long post.)
The first thing I'd do is break this down into steps, as you said in your comment, but more formally here:
1. Get the data that applies to all records. Put up a page with the necessary information (or support specifying it on the command line, if that's not too much work?).
2. For each line in the file, do the following (automated):
   - Get the web page for inputting its data;
   - Fill in the fields;
   - Submit the form.
Given this, I'd say the 'for each line' step should definitely read a line at a time from the file using File.foreach or similar; see the sketch below.
Is there anything beyond this that needs to be taken into account?
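As a minimal sketch of the grouping idea, assuming a hypothetical pipe-delimited layout where each line starts with a customer ID (the field layout, create_customer, and load_account are all invented for illustration; substitute your own Cucumber/Selenium steps):

require 'benchmark'

# Collect every line belonging to one customer, then load the whole
# group in a single execution instead of one rake run per line.
records = Hash.new { |h, k| h[k] = [] }

File.foreach(file_path) do |line|
  customer_id, *account_fields = line.chomp.split('|')
  records[customer_id] << account_fields
end

elapsed = Benchmark.measure do
  records.each do |customer_id, accounts|
    begin
      create_customer(customer_id)                    # your existing step
      accounts.each { |acct| load_account(customer_id, acct) }
    rescue StandardError => e
      # One bad customer shouldn't stop the run, mirroring the
      # original line-by-line fault tolerance.
      warn "Failed to load customer #{customer_id}: #{e.message}"
    end
  end
end
puts elapsed

Benchmark.measure also covers the execution-time requirement; for memory you would have to sample something like the process RSS yourself.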
I have a requirement to monitor the progress of a file being uploaded using a script. In PuTTY we can view the percentage uploaded, bytes transferred, upload speed, and ETA on the right-hand side. I want to develop similar functionality. Is there any way to achieve this?
Your question lacks any information about how your file is transferred. Most clients have some way to display progress, but that depends on the individual client used (scp, sftp, ncftp, ...).
But there is a way to monitor progress independently of what is doing the transfer: pv (pipe viewer).
This tool has the sole purpose of generating monitoring information. It can be used much like cat. You either use it to "lift" a file to pv's stdout...
pv -petar <file> | ...
...or you use it in the middle of a pipe, but then you need to manually provide the "expected size" in order to get a proper progress bar, since pv cannot determine the size of the transfer beforehand. I used a 2 gigabyte expected size here (-s 2G)...
cat <file> | pv -petar -s 2G | ...
The options used are -p (progress bar), -e (ETA), -t (elapsed time), -a (average rate), and -r (current rate). Together they make for a nice mnemonic.
Other nice options:
-L, which can be used to limit the maximum rate in the pipe (throttle).
-W, to make pv wait until data is actually transferred before showing a progress bar (e.g. if the tool you are piping the data to will require a password first).
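As a concrete example closer to the original upload question, you could watch a file go out over ssh like this (user, host, and paths are placeholders):
pv -petar backup.tar | ssh user@remotehost 'cat > /tmp/backup.tar'
Since pv can stat the local file, it knows the total size and can draw a full progress bar with ETA, much like the PuTTY tools do.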
This is most likely not what you're looking for (since chances are the transfer client you're using has its own options for showing progress), but it's the one tool I know that could work for virtually any kind of data transfer, and it might help others visiting this question in the future.
I'm trying to look at the Freebase data dump, which is stored on a server that I access through ssh. The trouble is I don't know how I can view it in a way that doesn't take forever, freeze things, or crash. I had been trying to view it with nano, and it evokes precisely the behaviour just described.
The operating system is Darwin.
How can I examine this data?
Basically, you could use the more or less commands to scroll through the file. If you know which lines in the file you are interested in, say lines 3000 to 3999, you could show them with sed -n '3000,3999p' your_file_name.
I use htop to view information about the processes currently running on my OS X machine, and also to sort them by CPU, memory usage, etc.
Is there any way to fetch the output of htop programmatically in Ruby? I would also like to be able to use the API to sort the processes by various parameters like CPU and memory usage.
I can do IO.popen('ps -a') and parse the output, but want to know if there is a better way than directly parsing the output of a system command run programmatically.
Check out sys-proctable:
require 'sys/proctable'
Sys::ProcTable.ps
To sort by starttime:
Sys::ProcTable.ps.sort_by(&:starttime)
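For example, to list the ten largest processes by resident memory (the available struct fields vary by platform, so rss and comm are assumptions here; check Sys::ProcTable.fields on your system):

require 'sys/proctable'

# Ten largest processes by resident set size, biggest first.
# nil.to_i is 0, so processes without an rss value sort last.
top = Sys::ProcTable.ps.sort_by { |p| -p.rss.to_i }.first(10)
top.each { |p| puts format('%-8d %s', p.pid, p.comm) }

Sorting by CPU works the same way if your platform's struct exposes a field like pctcpu.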
I can't copy the information in SoftICE to disk or a file. I am aware of IceExt, but every time I execute the command to dump the screen to disk (such as "!DumpScreen \??\c:\test.raw") it crashes my system entirely. When I try to copy with the mouse, the cursor only makes it possible to copy one line. I have already read through the SoftICE manual. I just need a way to retrieve data from SoftICE. Any help would be appreciated. I am using XP Professional.
It turns out that no add-ons are required to accomplish this. Use the command "u cs:eip L 1000" from SoftICE. You will then see a duplicate of the data from SoftICE's screen displayed in the command window.
The u 'unassembles' code at the address cs:eip (the current EIP), and the L specifies a length of 1000 bytes; you might need more than 1000, so adjust accordingly. Once you've done this, exit SoftICE and select File / Save SoftICE History As from Symbol Loader; with any luck the resulting file will contain your code dump. You may have to use F10 to step through in order to get the data into SoftICE's history log.
Using this method, I successfully dumped the entire code window and data window. The main drawback is that the resulting text will contain a lot of noise (unnecessary data); I haven't figured out how to get around this. This is an adaptation of Woodmann's method.